perm filename PHILSC[JNK,JMC]1 blob sn#701678 filedate 1983-02-03 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00421 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00061 00002	∂12-Jan-83  0024	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Popper  
C00071 00003	∂12-Jan-83  0026	LEVITT @ MIT-MC 	Chaos vs complexity    
C00073 00004	∂12-Jan-83  0057	GAVAN @ MIT-MC 	Popper   
C00079 00005	∂12-Jan-83  0107	GAVAN @ MIT-MC 	Chaos vs complexity
C00083 00006	∂11-Jan-83  2345	GAVAN @ MIT-MC 	Scientific Community Metaphor compatible with Society of Mind?  
C00091 00007	∂12-Jan-83  0011	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 
C00101 00008	∂12-Jan-83  0820	DAM @ MIT-MC 	Occum's razor   
C00105 00009	∂12-Jan-83  0853	DAM @ MIT-MC 	Goals 
C00108 00010	∂12-Jan-83  0946	MINSKY @ MIT-MC 	Occum's razor
C00112 00011	∂12-Jan-83  1048	GAVAN @ MIT-MC 	Occum's razor 
C00124 00012	∂12-Jan-83  1106	DAM @ MIT-MC 	Occum's razor   
C00127 00013	∂12-Jan-83  1121	MINSKY @ MIT-MC 	Occum's razor
C00129 00014	∂12-Jan-83  1208	HEWITT @ MIT-XX 	Occum's razor
C00133 00015	∂12-Jan-83  1222	BATALI @ MIT-MC 	Goals   
C00135 00016	∂12-Jan-83  1241	DAM @ MIT-MC 	Occum's razor   
C00137 00017	∂12-Jan-83  1327	GAVAN @ MIT-MC 	Occum's razor 
C00139 00018	∂12-Jan-83  1612	GAVAN @ MIT-MC 	Occum's razor 
C00147 00019	∂12-Jan-83  1652	LEVITT @ MIT-MC 	Chaos vs complexity    
C00149 00020	∂12-Jan-83  1907	MINSKY at MIT-OZ at MIT-MC 	OCcam's razor    
C00153 00021	∂12-Jan-83  1918	BATALI @ MIT-MC 	rat psychology    
C00156 00022	∂12-Jan-83  1919	MINSKY @ MIT-MC 	Occum's razor
C00158 00023	∂12-Jan-83  2107	GAVAN @ MIT-MC 	Chaos vs complexity
C00161 00024	∂12-Jan-83  2124	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	reformulation
C00164 00025	∂12-Jan-83  2148	GAVAN @ MIT-MC 	rat psychology
C00170 00026	∂12-Jan-83  2153	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Scientific-Engineering Community Metaphor compatible with Society of the Mind?
C00178 00027	∂12-Jan-83  2224	KDF @ MIT-MC 	reformulation   
C00179 00028	∂12-Jan-83  2234	KDF @ MIT-MC 	Confounding
C00181 00029	∂12-Jan-83  2249	Carl Hewitt <Hewitt at MIT-OZ> 	semantics for reasoning
C00187 00030	∂12-Jan-83  2324	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 
C00201 00031	∂12-Jan-83  2333	GAVAN @ MIT-MC 	Confounding   
C00209 00032	∂12-Jan-83  2356	GAVAN @ MIT-MC 	reformulation 
C00211 00033	∂13-Jan-83  0042	JCMa@MIT-OZ at MIT-MC 	Popper, again.   
C00213 00034	∂13-Jan-83  0057	JCMa@MIT-OZ at MIT-MC 	Max Planck's view
C00215 00035	∂13-Jan-83  0119	JCMa@MIT-OZ at MIT-MC 	Solomonoff-Kolmogoroff theory   
C00217 00036	∂13-Jan-83  0148	JCMa@MIT-OZ at MIT-MC 	Confounding 
C00221 00037	∂13-Jan-83  0308	ISAACSON at USC-ISI 	Peirce for message passing semantics   
C00222 00038	∂13-Jan-83  0717	MINSKY @ MIT-MC 	reformulation
C00223 00039	∂13-Jan-83  0825	BATALI @ MIT-MC 	Science vs Perception  
C00227 00040	∂13-Jan-83  0831	BAK @ MIT-MC 	Popper, again.  
C00231 00041	∂13-Jan-83  0928	MINSKY @ MIT-MC 	Popper, again.    
C00237 00042	∂13-Jan-83  0940	DAM @ MIT-MC 	Statement, Truth, and Entailment,   
C00241 00043	∂13-Jan-83  0956	MINSKY @ MIT-MC 	Statement, Truth, and Entailment,
C00245 00044	∂13-Jan-83  1012	DAM @ MIT-MC 	Doing as as Test for Cognitive Theories. 
C00248 00045	∂13-Jan-83  1114	BATALI @ MIT-MC 	Solomonov Papers  
C00250 00046	∂13-Jan-83  1119	DAM @ MIT-MC 	Statement, Truth, and Entailment    
C00255 00047	∂13-Jan-83  1124	DAM @ MIT-MC 	Statement, Truth, and Entailment    
C00257 00048	∂13-Jan-83  1252	William A. Kornfeld <BAK at MIT-OZ at MIT-MC> 	the real numbers  
C00258 00049	∂13-Jan-83  1256	MINSKY @ MIT-MC 	Statement, Truth, and Entailment 
C00262 00050	∂13-Jan-83  1258	DAM @ MIT-MC 	Statement, Truth, and Entailment    
C00266 00051	∂13-Jan-83  1301	BATALI @ MIT-MC 	Doing as as Test for Cognitive Theories.   
C00269 00052	∂13-Jan-83  1308	ISAACSON at USC-ISI 	Real numbers stuff 
C00271 00053	∂13-Jan-83  1317	DAM @ MIT-MC 	Statement, Truth, and Entailment    
C00272 00054	∂13-Jan-83  1332	GAVAN @ MIT-MC 	reformulation 
C00274 00055	∂13-Jan-83  1407	KDF @ MIT-MC 	Popper, again.  
C00276 00056	∂13-Jan-83  1432	GAVAN @ MIT-MC 	Science vs Perception, a false dichotomy    
C00285 00057	∂13-Jan-83  1523	HEWITT @ MIT-XX 	Peirce for message passing semantics? 
C00286 00058	∂13-Jan-83  1547	DAM @ MIT-MC 	non-monotonic logic  
C00288 00059	∂13-Jan-83  1612	BATALI @ MIT-MC 	Science vs Perception, a TRUE dichotomy    
C00291 00060	∂13-Jan-83  1733	GAVAN @ MIT-MC 	Science vs Perception, a LUDICROUS dichotomy
C00297 00061	∂13-Jan-83  1747	GAVAN @ MIT-MC 	Popper, again.
C00299 00062	∂13-Jan-83  1829	GAVAN @ MIT-MC 	Peirce for message passing pragmatics! 
C00301 00063	∂13-Jan-83  1832	ISAACSON at USC-ISI 	Peirce for maessage passing pragmatics!
C00303 00064	∂13-Jan-83  1849	MINSKY @ MIT-MC 	Solomonoff et alia
C00304 00065	∂13-Jan-83  1908	GAVAN @ MIT-MC 	Solomonoff et alia 
C00306 00066	∂13-Jan-83  2022	GAVAN @ MIT-MC 	Solomonoff et alia 
C00308 00067	∂13-Jan-83  2122	ISAACSON at USC-ISI 	Re:  Peirce for message passing pragmatics! 
C00321 00068	∂14-Jan-83  0116	John McCarthy <JMC@SU-AI>
C00331 00069	∂14-Jan-83  0202	GAVAN @ MIT-MC 	theories of truth  
C00339 00070	∂14-Jan-83  0250	KDF @ MIT-MC 	Confounding
C00342 00071	∂14-Jan-83  0322	JCMa@MIT-OZ at MIT-MC 	Peirce for message passing semantics?
C00344 00072	∂14-Jan-83  0340	JCMa@MIT-OZ at MIT-MC 	Statement, Truth, and Entailment
C00347 00073	∂14-Jan-83  0342	GAVAN @ MIT-MC 	consensus
C00348 00074	∂14-Jan-83  0447	JCMa@MIT-OZ at MIT-MC 	Approximation Theory of Truth: Re: Theories Of Truth
C00352 00075	∂14-Jan-83  0448	JCMa@MIT-OZ at MIT-MC 	Subject and In-Reply-To fields in messages
C00353 00076	∂14-Jan-83  0952	DAM @ MIT-MC 	Consensus Theory of Truth 
C00358 00077	∂14-Jan-83  1011	John McCarthy <JMC@SU-AI> 	consensus theory of truth        
C00360 00078	∂14-Jan-83  1339	KDF @ MIT-MC 	Interaction between theory and observation    
C00366 00079	∂14-Jan-83  1349	KDF @ MIT-MC 	Reductionism    
C00368 00080	∂14-Jan-83  1519	DAM @ MIT-MC 	consensus theory of truth 
C00370 00081	∂14-Jan-83  1818	Carl Hewitt <Hewitt at MIT-OZ at MIT-ML> 	Peirce for message passing semantics? 
C00373 00082	∂14-Jan-83  1848	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Confounding  
C00377 00083	∂14-Jan-83  1853	John McCarthy <JMC@SU-AI> 	Consensus theory of truth   
C00380 00084	∂14-Jan-83  1931	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Scientific-Engineering Community Metaphor compatible with Society of the Mind?
C00395 00085	∂14-Jan-83  1943	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Statement, Truth, and Entailment,
C00400 00086	∂14-Jan-83  2009	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
C00404 00087	∂14-Jan-83  2241	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Consensus Theory of Truth   
C00410 00088	∂15-Jan-83  0642	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 
C00423 00089	∂15-Jan-83  1329	DAM @ MIT-MC 	Consensus theory of truth      
C00426 00090	∂15-Jan-83  1355	DAM @ MIT-MC 	Solomonoff 
C00429 00091	∂15-Jan-83  1421	DAM @ MIT-MC 	Consensus Theory of Truth 
C00435 00092	∂15-Jan-83  1433	John McCarthy <JMC@SU-AI> 	correspondence theory of truth   
C00444 00093	∂15-Jan-83  1436	DAM @ MIT-MC 	Statement, Truth, and Entailment    
C00447 00094	∂15-Jan-83  1522	MINSKY @ MIT-MC 	Solomonoff and RElativity, etc.  
C00452 00095	∂15-Jan-83  1527	MINSKY @ MIT-MC 	correspondence theory of truth   
C00456 00096	∂15-Jan-83  1647	DAM @ MIT-MC 	Solomonoff and RElativity, etc.
C00467 00097	∂15-Jan-83  2138	John McCarthy <JMC@SU-AI> 	correspondence model of truth    
C00474 00098	∂15-Jan-83  2142	John McCarthy <JMC@SU-AI> 	consensus theory of truth and Solomonoff et al. 
C00478 00099	∂15-Jan-83  2148	KDF @ MIT-MC 	Solomonoff 
C00479 00100	∂15-Jan-83  2153	DAM @ MIT-MC 	correspondence theory of truth - Circumscription and Occam   
C00483 00101	∂15-Jan-83  2224	HEWITT @ MIT-OZ 	Truth-Theoretic Semantics different from Message Passing Semantics  
C00487 00102	∂15-Jan-83  2240	ISAACSON at USC-ISI 	"Obstacles-and-Roofs" Worlds 
C00490 00103	∂15-Jan-83  2244	HEWITT @ MIT-OZ 	Consensus Theory of Truth   
C00497 00104	∂15-Jan-83  2336	John McCarthy <JMC@SU-AI> 	"Obstacles-and-Roofs" Worlds
C00498 00105	∂15-Jan-83  2325	KDF @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
C00500 00106	∂16-Jan-83  0151	ISAACSON at USC-ISI 	"Obstacles-and-Roofs" Machines    
C00502 00107	∂16-Jan-83  0446	GAVAN @ MIT-MC 	"Truth" as coherence, consensus, correspondence, and simplicity.
C00509 00108	∂16-Jan-83  0506	GAVAN @ MIT-MC 	meta-epistemology and the God's-eye view.   
C00510 00109	∂16-Jan-83  0515	GAVAN @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics   
C00512 00110	∂16-Jan-83  0541	GAVAN @ MIT-MC 	consensus theory of truth    
C00515 00111	∂16-Jan-83  0817	ISAACSON at USC-ISI 	A note on coherence
C00518 00112	∂16-Jan-83  1213	John McCarthy <JMC@SU-AI> 	correspondence theory of truth   
C00528 00113	∂16-Jan-83  1305	DAM @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
C00531 00114	∂16-Jan-83  1315	DAM @ MIT-MC 	Objectivity of Mathematics
C00533 00115	∂16-Jan-83  1359	DAM @ MIT-MC 	Consensus Theory of Truth 
C00536 00116	∂16-Jan-83  1506	John McCarthy <JMC@SU-AI>
C00538 00117	∂16-Jan-83  1540	BATALI @ MIT-MC 	Consensus Theory of Truth   
C00545 00118	∂16-Jan-83  1600	ISAACSON at USC-ISI 	"O&R" machines
C00547 00119	∂16-Jan-83  1648	HEWITT @ MIT-OZ 	theories of meaning    
C00550 00120	∂16-Jan-83  1706	John McCarthy <JMC@SU-AI> 	theories of meaning    
C00553 00121	∂16-Jan-83  1710	DAM @ MIT-MC 	Consensus Theory of Truth 
C00561 00122	∂16-Jan-83  1712	DAM @ MIT-MC 	Occam's Razor   
C00563 00123	∂16-Jan-83  1757	John Batali <Batali at MIT-OZ> 	theories of meaning    
C00568 00124	∂16-Jan-83  1901	KDF @ MIT-MC 	theories of meaning  
C00569 00125	∂17-Jan-83  0105	John McCarthy <JMC@SU-AI> 	verificationism        
C00572 00126	∂17-Jan-83  0108	KDF @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
C00575 00127	∂17-Jan-83  0202	GAVAN @ MIT-MC 	Consensus Theory of Truth    
C00583 00128	∂17-Jan-83  0216	GAVAN @ MIT-MC 	Occam's Razor 
C00586 00129	∂17-Jan-83  0234	GAVAN @ MIT-MC 	A note on coherence
C00590 00130	∂17-Jan-83  0250	philosophy-of-science-request@MIT-MC 	List Info   
C00591 00131	∂17-Jan-83  0251	GAVAN @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics   
C00596 00132	∂17-Jan-83  0715	BATALI @ MIT-MC 	verificationism        
C00598 00133	∂17-Jan-83  0746	BATALI @ MIT-MC 	Consensus Theory of Truth   
C00604 00134	∂17-Jan-83  0805	GAVAN @ MIT-MC 	verificationism and correspondence
C00606 00135	∂17-Jan-83  1240	DAM @ MIT-MC 	Consensus Theory of Truth 
C00609 00136	∂17-Jan-83  1322	John McCarthy <JMC@SU-AI> 	verificationism and correspondence    
C00610 00137	∂17-Jan-83  1322	DAM @ MIT-MC 	Solomonoff et. al.   
C00613 00138	∂17-Jan-83  1408	GAVAN @ MIT-MC 	Solomonoff et. al. 
C00618 00139	∂17-Jan-83  1440	John McCarthy <JMC@SU-AI> 	Lakatos and Solomonoff      
C00620 00140	∂17-Jan-83  1447	GAVAN @ MIT-MC 	correspondence theory of truth    
C00633 00141	∂17-Jan-83  1512	BATALI @ MIT-MC 	correspondence theory of truth   
C00635 00142	∂17-Jan-83  1518	BATALI @ MIT-MC 	Consensus Theory of Truth   
C00639 00143	∂17-Jan-83  1821	ISAACSON at USC-ISI 	Non-technical Chaitin's papers    
C00641 00144	∂17-Jan-83  2149	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	verificationism        
C00645 00145	∂17-Jan-83  2216	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Objectivity of Mathematics  
C00650 00146	∂17-Jan-83  2239	John McCarthy <JMC@SU-AI> 	Correspondence theory of truth and meta-epistemology 
C00655 00147	∂17-Jan-83  2315	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
C00658 00148	∂18-Jan-83  0113	MINSKY @ MIT-MC 	The smallest description of the past is the best theory for the future?  
C00660 00149	∂18-Jan-83  0637	GAVAN @ MIT-MC 	Non-technical Chaitin's papers    
C00662 00150	∂18-Jan-83  0637	GAVAN @ MIT-MC 	correspondence theory of truth    
C00667 00151	∂18-Jan-83  0749	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
C00671 00152	∂18-Jan-83  1214	John McCarthy <JMC@SU-AI> 	Correspondence theory of truth   
C00673 00153	∂18-Jan-83  1316	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Correspondence theory of truth    
C00678 00154	∂18-Jan-83  1352	Jon Amsterdam <JBA at MIT-OZ> 	The smallest description of the past is the best theory for the future?   
C00682 00155	∂18-Jan-83  1448	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?   
C00685 00156	∂18-Jan-83  1448	Gavan Duffy <GAVAN at MIT-OZ> 	The smallest description of the past is the best theory for the future?   
C00690 00157	∂18-Jan-83  1506	John McCarthy <JMC@SU-AI> 	correspondence theory  
C00692 00158	∂18-Jan-83  1515	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Lakatos and Solomonoff  
C00694 00159	∂18-Jan-83  1521	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Consensus Theory of Truth    
C00702 00160	∂18-Jan-83  1549	GAVAN @ MIT-MC 	correspondence theory   
C00706 00161	∂18-Jan-83  1827	PHIL-SCI-REQUEST@MIT-MC 	List Info:  Distributed Indexation 
C00709 00162	∂18-Jan-83  1941	CSD.BRODER@SU-SCORE (SuNet)  	Next AFLB talk(s)   
C00713 00163	∂18-Jan-83  2128	ISAACSON at USC-ISI 	Summaries, please ...   
C00719 00164	∂18-Jan-83  2300	John McCarthy <JMC@SU-AI> 	Correspondence theory  
C00722 00165	∂19-Jan-83  0203	GAVAN @ MIT-MC 	Putnam, Life Worlds, Real Worlds, Natural Language, and Natural Numbers.  
C00732 00166	∂19-Jan-83  1037	DAM @ MIT-MC 	Consensus Theory of Truth 
C00737 00167	∂19-Jan-83  1119	DAM @ MIT-MC 	The smallest description of the past is the best theory for the future?
C00742 00168	∂19-Jan-83  1158	MINSKY @ MIT-MC 	The smallest description of the past is the best theory for the future?  
C00746 00169	∂19-Jan-83  1516	DAM @ MIT-MC 	The smallest description of the past is the best theory for the future?
C00751 00170	∂19-Jan-83  1620	ISAACSON at USC-ISI 	More on "O&R" machines  
C00757 00171	∂19-Jan-83  1730	John Batali <Batali at MIT-OZ at MIT-MC> 	Solomonov    
C00762 00172	∂19-Jan-83  2008	MINSKY @ MIT-MC 	Solomonov    
C00766 00173	∂20-Jan-83  0227	John McCarthy <JMC@SU-AI> 	Lakatos review, Putnam, and Solomonoff (or even Solomonov)
C00774 00174	∂20-Jan-83  0516	GAVAN @ MIT-MC 	Lakatos review, Putnam. 
C00785 00175	∂20-Jan-83  0856	DAM @ MIT-MC 	Solomonoff 
C00787 00176	∂20-Jan-83  0931	DAM @ MIT-MC 	Mathematical Terminology  
C00791 00177	∂20-Jan-83  1127	John McCarthy <JMC@SU-AI>
C00794 00178	∂20-Jan-83  1132	GAVAN @ MIT-MC 	Mathematical Terminology
C00804 00179	∂20-Jan-83  1441	John Batali <Batali at MIT-OZ at MIT-MC> 	Solomonoff   
C00810 00180	∂20-Jan-83  1454	John Batali <Batali at MIT-OZ at MIT-MC> 	The Social Sciences    
C00813 00181	∂20-Jan-83  1551	DAM @ MIT-MC 	Randomness 
C00818 00182	∂20-Jan-83  2054	JCMa@MIT-OZ 	The Social Sciences   
C00828 00183	∂21-Jan-83  1336	MINSKY @ MIT-MC 	Randomness   
C00839 00184	∂21-Jan-83  1345	GAVAN @ MIT-MC 	The Social Sciences
C00845 00185	∂21-Jan-83  1345	MINSKY @ MIT-MC 	Randomness   
C00847 00186	∂21-Jan-83  1345	GAVAN @ MIT-MC 	The smallest description of the past is the best theory for the future?   
C00849 00187	∂21-Jan-83  1348	MINSKY @ MIT-MC 	Learning Meaning  
C00851 00188	∂21-Jan-83  1354	GAVAN @ MIT-MC 
C00856 00189	∂21-Jan-83  1419	GAVAN @ MIT-MC 	Is there a mathematician in the house? 
C00859 00190	∂21-Jan-83  1548	ISAACSON at USC-ISI 	Re:  Learning Meaning   
C00861 00191	∂21-Jan-83  1720	BAK @ MIT-MC 	Hewitt's claim  
C00866 00192	∂21-Jan-83  1915	MINSKY @ MIT-MC 	Is there a mathematician in the house?
C00868 00193	∂21-Jan-83  2152	John McCarthy <JMC@SU-AI> 	correspondence theory  
C00870 00194	∂21-Jan-83  2158	MINSKY @ MIT-MC 	Hewitt's claim    
C00874 00195	∂21-Jan-83  2211	MINSKY @ MIT-MC 	correspondence theory  
C00876 00196	∂21-Jan-83  2252	John McCarthy <JMC@SU-AI>
C00882 00197	∂21-Jan-83  2305	John McCarthy <JMC@SU-AI>
C00884 00198	∂21-Jan-83  2308	BAK @ MIT-MC 	Hewitt's claim  
C00887 00199	∂22-Jan-83  0523	MINSKY @ MIT-MC 	Hewitt's claim    
C00890 00200	∂22-Jan-83  1031	MINSKY at MIT-OZ at MIT-MC 	A theory.   
C00892 00201	∂22-Jan-83  1037	MINSKY at MIT-OZ at MIT-MC 	A theory.   
C00911 00202	∂22-Jan-83  1251	John McCarthy <JMC@SU-AI>
C00915 00203	∂22-Jan-83  1328	MINSKY @ MIT-MC
C00918 00204	∂22-Jan-83  1425	DAM @ MIT-MC 	Hewitt's claim and Church' thesis   
C00921 00205	∂22-Jan-83  1438	BAK @ MIT-MC 	Hewitt's claim  
C00923 00206	∂22-Jan-83  1724	DAM @ MIT-MC 	Objectivity in Mathematics (Minsky's Theory)  
C00929 00207	∂22-Jan-83  1907	MINSKY @ MIT-MC 	Objectivity in Mathematics (Minsky's Theory)    
C00932 00208	∂22-Jan-83  1917	William A. Kornfeld <BAK at MIT-OZ at MIT-MC> 	Sparseness theory 
C00936 00209	∂23-Jan-83  0124	John McCarthy <JMC@SU-AI>
C00940 00210	∂23-Jan-83  0410	GAVAN @ MIT-MC 	correspondence theory   
C00943 00211	∂23-Jan-83  0515	GAVAN @ MIT-MC 
C00951 00212	∂23-Jan-83  1125	DAM @ MIT-MC 	Minsky's Theory 
C00954 00213	∂23-Jan-83  1128	DAM @ MIT-MC 	Minsky's Theory 
C00957 00214	∂23-Jan-83  1149	DAM @ MIT-MC 	Corrospondence Theory
C00959 00215	∂23-Jan-83  1347	ISAACSON at USC-ISI 	Re:  Minsky's Theory    
C00962 00216	∂23-Jan-83  1840	MONTALVO@HP-HULK@HP-VENUS@RAND-RELAY 	Re: Summaries, please ...  
C00964 00217	∂23-Jan-83  1923	BATALI @ MIT-MC 	Correspondence    
C00966 00218	∂23-Jan-83  2052	MINSKY @ MIT-MC 	Minsky's Theory   
C00970 00219	∂23-Jan-83  2056	MINSKY @ MIT-MC 	Minsky's Theory   
C00973 00220	∂23-Jan-83  2111	MINSKY @ MIT-MC 	Minsky's Theory   
C00975 00221	∂24-Jan-83  0130	KDF @ MIT-MC 	Minsky's Theory 
C00977 00222	∂24-Jan-83  0139	GAVAN @ MIT-MC 	Correspondence, Coherence, and Consensus    
C00982 00223	∂24-Jan-83  0155	GAVAN @ MIT-MC 	Corrospondence Theory   
C00985 00224	∂24-Jan-83  0205	GAVAN @ MIT-MC 	Minsky's Theory    
C00988 00225	∂24-Jan-83  0422	John McCarthy <JMC@SU-AI> 	objective physical and mathematical worlds 
C00996 00226	∂24-Jan-83  0731	John Batali <Batali at MIT-OZ at MIT-MC> 	Pragmatics   
C01001 00227	∂24-Jan-83  0859	GAVAN @ MIT-MC 	Pragmatics    
C01013 00228	∂24-Jan-83  0951	John Batali <Batali at MIT-OZ at MIT-MC> 	Pragmatics   
C01020 00229	∂24-Jan-83  1049	GAVAN @ MIT-MC 	Pragmatics    
C01029 00230	∂24-Jan-83  1216	GAVAN @ MIT-MC 	subjective physical and mathematical worlds 
C01038 00231	∂24-Jan-83  1400	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01039 00232	∂24-Jan-83  1406	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01041 00233	∂24-Jan-83  1415	DAM @ MIT-MC 	objective physical and mathematical worlds    
C01044 00234	∂24-Jan-83  1426	DAM @ MIT-MC 	correction 
C01045 00235	∂24-Jan-83  1431	John McCarthy <JMC@SU-AI> 	"your version of reality"   
C01049 00236	∂24-Jan-83  1517	John McCarthy <JMC@SU-AI> 	correspondence theory  
C01052 00237	∂24-Jan-83  1550	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
C01054 00238	∂24-Jan-83  1554	DAM @ MIT-MC 	Objectivity of Mathematics
C01058 00239	∂24-Jan-83  1554	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01060 00240	∂24-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01066 00241	∂24-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01068 00242	∂24-Jan-83  1631	KDF @ MIT-MC 	The Objectivity of Mathematics 
C01071 00243	∂24-Jan-83  1658	ISAACSON at USC-ISI 	Re:  The objectivity of mathematics    
C01074 00244	∂24-Jan-83  1700	ISAACSON at USC-ISI 	Re:  The objectivity of discussing mathematics   
C01076 00245	∂25-Jan-83  1103	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01085 00246	∂25-Jan-83  1104	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01087 00247	∂25-Jan-83  1104	DAM @ MIT-MC 	Objectivity of Mathematics
C01091 00248	∂25-Jan-83  1114	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
C01095 00249	∂25-Jan-83  1353	GAVAN @ MIT-MC 	"your version of reality"    
C01103 00250	∂25-Jan-83  1357	GAVAN @ MIT-MC 	correspondence theory   
C01107 00251	∂25-Jan-83  1404	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01110 00252	∂25-Jan-83  1512	BATALI @ MIT-MC 	correspondence theory  
C01112 00253	∂25-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01114 00254	∂25-Jan-83  1612	GAVAN @ MIT-MC 	correspondence theory   
C01118 00255	∂25-Jan-83  1622	BATALI @ MIT-MC 	Practical Necessity    
C01122 00256	∂25-Jan-83  1633	John McCarthy <JMC@SU-AI> 	correspondence theory, misunderstanding thereof 
C01124 00257	∂25-Jan-83  1642	GAVAN @ MIT-MC 	correspondence theory, misunderstanding thereof  
C01129 00258	∂25-Jan-83  1645	BATALI @ MIT-MC 	correspondence theory  
C01135 00259	∂25-Jan-83  1655	GAVAN @ MIT-MC 	sentences
C01137 00260	∂25-Jan-83  1703	GAVAN @ MIT-MC 	Practical Necessity
C01142 00261	∂25-Jan-83  1813	John Batali <Batali at MIT-OZ at MIT-MC> 	Objectivity, ad nauseum
C01148 00262	∂25-Jan-83  1842	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01150 00263	∂25-Jan-83  1908	GAVAN @ MIT-MC 	correspondence theory   
C01161 00264	∂25-Jan-83  2113	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
C01163 00265	∂25-Jan-83  2134	GAVAN @ MIT-MC 	Objectivity, ad nauseum 
C01173 00266	∂25-Jan-83  2251	JCMa@MIT-OZ 	The Objectivity of Mathematics  
C01176 00267	∂25-Jan-83  2336	JCMa@MIT-OZ at MIT-MC 	Winograd interview in Le Monde (FTPing of)
C01177 00268	∂26-Jan-83  0203	ISAACSON at USC-ISI 
C01187 00269	∂26-Jan-83  1825	←Bob <Carter at RUTGERS> 
C01192 00270	∂26-Jan-83  1840	John McCarthy <JMC@SU-AI> 	intuitionism      
C01197 00271	∂26-Jan-83  1847	ISAACSON at USC-ISI 	Re:  intuitionism  
C01198 00272	∂26-Jan-83  2057	KDF @ MIT-MC 	The Objectivity of Mathematics 
C01203 00273	∂26-Jan-83  2125	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01204 00274	∂26-Jan-83  2134	GAVAN @ MIT-MC 
C01208 00275	∂26-Jan-83  2134	MINSKY @ MIT-MC
C01210 00276	∂26-Jan-83  2257	ISAACSON at USC-ISI 	Correction:  Heyting ==> Beth
C01213 00277	∂26-Jan-83  2320	ISAACSON at USC-ISI 	Epistemogenic Stuff
C01217 00278	∂26-Jan-83  2323	ISAACSON at USC-ISI 	Re:  intuitionism  
C01219 00279	∂26-Jan-83  2326	MINSKY @ MIT-MC
C01223 00280	∂27-Jan-83  0151	ISAACSON at USC-ISI 	Epistemogenic Stuff
C01226 00281	∂27-Jan-83  0728	ISAACSON at USC-ISI 	First Peirce - Then the Bible!    
C01234 00282	∂27-Jan-83  1542	DAM @ MIT-MC 	The Objectivity of Mathematics 
C01239 00283	∂27-Jan-83  1550	DAM @ MIT-MC 	intuitionism    
C01241 00284	∂27-Jan-83  1607	DAM @ MIT-MC 	Earlier Work    
C01244 00285	∂27-Jan-83  1616	DAM @ MIT-MC 	Summary    
C01248 00286	∂27-Jan-83  2024	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
C01250 00287	∂27-Jan-83  2028	MINSKY @ MIT-MC 	Summary 
C01252 00288	∂28-Jan-83  0215	GAVAN @ MIT-MC 	Earlier Work  
C01255 00289	∂28-Jan-83  0221	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01258 00290	∂28-Jan-83  0836	DAM @ MIT-MC 	Summary    
C01261 00291	∂28-Jan-83  0845	DAM @ MIT-MC 	Sentences  
C01263 00292	∂28-Jan-83  0901	GAVAN @ MIT-MC 	Sentences
C01267 00293	∂28-Jan-83  0905	GAVAN @ MIT-MC 	Sentences
C01269 00294	∂28-Jan-83  1150	ISAACSON at USC-ISI 	Job Numbers   
C01271 00295	∂28-Jan-83  1232	GAVAN @ MIT-MC 
C01274 00296	∂28-Jan-83  1253	GAVAN @ MIT-MC 	First Peirce - Then the Bible!    
C01278 00297	∂28-Jan-83  1427	MINSKY @ MIT-MC
C01281 00298	∂28-Jan-83  1453	MINSKY @ MIT-MC 	Summary 
C01289 00299	∂28-Jan-83  1551	KDF @ MIT-MC 	The Objectivity of Mathematics 
C01292 00300	∂28-Jan-83  1603	DAM @ MIT-MC 	Sentences  
C01297 00301	∂28-Jan-83  1614	DAM @ MIT-MC 	meaning    
C01299 00302	∂28-Jan-83  1634	ISAACSON at USC-ISI 	Welcome to the club (?) 
C01302 00303	∂28-Jan-83  1920	MINSKY @ MIT-MC 	Sentences    
C01306 00304	∂28-Jan-83  1927	ISAACSON at USC-ISI 	Re:  meaning  
C01307 00305	∂28-Jan-83  2022	phil-sci-request at MIT-MC 	Archives On MIT-AI    
C01314 00306	∂28-Jan-83  2340	John McCarthy <JMC@SU-AI> 	sentences    
C01317 00307	∂29-Jan-83  0009	GAVAN @ MIT-MC 	meaning  
C01319 00308	∂29-Jan-83  0023	←Bob <Carter at RUTGERS> 	sentences
C01321 00309	∂29-Jan-83  0151	JCMa@MIT-OZ 	meta-epistemology, philosophy of science, innateness, and learning 
C01327 00310	∂29-Jan-83  0806	DAM @ MIT-MC 	Tarskian Semantics   
C01333 00311	∂29-Jan-83  0809	GAVAN @ MIT-MC 
C01337 00312	∂29-Jan-83  0835	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
C01340 00313	∂29-Jan-83  0845	GAVAN @ MIT-MC 	meaning  
C01342 00314	∂29-Jan-83  0935	MINSKY @ MIT-MC
C01344 00315	∂29-Jan-83  1053	MINSKY @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning  
C01347 00316	∂29-Jan-83  1139	GAVAN @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning   
C01350 00317	∂29-Jan-83  1156	ISAACSON at USC-ISI 	The Meta-Epistemogen:  Difference Detection 
C01352 00318	∂29-Jan-83  1212	DAM @ MIT-MC 	Sentences  
C01357 00319	∂29-Jan-83  1216	DAM @ MIT-MC 	Definitions of "innate"   
C01361 00320	∂29-Jan-83  1222	DAM @ MIT-MC 	Sentences  
C01363 00321	∂29-Jan-83  1225	JCMa@MIT-OZ at MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning 
C01369 00322	∂29-Jan-83  1232	ISAACSON at USC-ISI 	What is a chair?   
C01373 00323	∂29-Jan-83  1243	GAVAN @ MIT-MC 	CREATION, AUTOPOEISIS [smash epistemogens]:  Difference Detection    
C01375 00324	∂29-Jan-83  1249	JCMa@MIT-OZ at MIT-MC 	POESIS: The Meta-Epistemogen:  Difference Detection 
C01378 00325	∂29-Jan-83  1330	ISAACSON at USC-ISI 	Epistemogen ===>   Poesis    
C01383 00326	∂29-Jan-83  1342	GAVAN @ MIT-MC 	The Meaninglessness of Tarskian Semantics   
C01387 00327	∂29-Jan-83  1411	GAVAN @ MIT-MC 	What is a chair?   
C01391 00328	∂29-Jan-83  1422	GAVAN @ MIT-MC 	Sentences
C01395 00329	∂29-Jan-83  1528	←Bob <Carter at RUTGERS> 	Sentences
C01397 00330	∂29-Jan-83  1604	MINSKY @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning  
C01401 00331	∂29-Jan-83  1608	BATALI @ MIT-MC 	Tarskian Coherence
C01405 00332	∂29-Jan-83  1614	MINSKY @ MIT-MC 	Definitions of "innate"
C01408 00333	∂29-Jan-83  1634	BATALI @ MIT-MC 	Kant was a smart fella, honest.  
C01413 00334	∂29-Jan-83  1705	John McCarthy <JMC@SU-AI> 	innateness, sentences, etc.      
C01421 00335	∂29-Jan-83  1809	ISAACSON at USC-ISI 	Re:  What is a chair?   
C01423 00336	∂29-Jan-83  2057	MINSKY @ MIT-MC 	Kant was a smart fella, honest.  
C01428 00337	∂29-Jan-83  2209	KDF @ MIT-MC 	The Objectivity of Mathematics 
C01432 00338	∂29-Jan-83  2319	MINSKY @ MIT-MC 	innateness, sentences, etc.      
C01434 00339	∂30-Jan-83  0817	John Batali <Batali at MIT-OZ> 	Kant: no dummy    
C01441 00340	∂30-Jan-83  1045	GAVAN @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning   
C01446 00341	∂30-Jan-83  1105	GAVAN @ MIT-MC 	Sentences
C01448 00342	∂30-Jan-83  1128	DAM @ MIT-MC 	Tarskian Semantics   
C01459 00343	∂30-Jan-83  1220	John C. Mallery <JCMa at MIT-OZ> 	Principle of Charity in Argument and Creole Langauges   
C01464 00344	∂30-Jan-83  1246	John C. Mallery <JCMa at MIT-OZ> 	Chomsky, Fodor, Innateness
C01468 00345	∂30-Jan-83  1246	John McCarthy <JMC@SU-AI>
C01475 00346	∂30-Jan-83  1227	DAM @ MIT-MC 	innateness 
C01477 00347	∂30-Jan-83  1248	John McCarthy <JMC@SU-AI> 	my error     
C01479 00348	∂30-Jan-83  1144	John C. Mallery <JCMa at MIT-OZ> 	meta-epistemology, philosophy of science, innateness, and learning
C01481 00349	∂30-Jan-83  1239	John C. Mallery <JCMa at MIT-OZ> 	innateness, sentences, etc.    
C01485 00350	∂30-Jan-83  1311	DAM @ MIT-MC 	Tarskian Semantics   
C01488 00351	∂30-Jan-83  1405	DAM @ MIT-MC 	innateness, sentences, etc.    
C01495 00352	∂30-Jan-83  1410	DAM @ MIT-MC 	innateness, sentences, etc.    
C01498 00353	∂30-Jan-83  1418	MINSKY @ MIT-MC 	innateness, sentences, etc. 
C01500 00354	∂30-Jan-83  1424	KDF @ MIT-MC 	Innateness of Space and Time   
C01504 00355	∂30-Jan-83  1428	DAM @ MIT-MC 	some mathematical results 
C01506 00356	∂30-Jan-83  1432	MINSKY @ MIT-MC 	meta-epistemology, etc.
C01509 00357	∂30-Jan-83  1441	KDF @ MIT-MC 	meta-epistemology, etc.   
C01513 00358	∂30-Jan-83  1454	GAVAN @ MIT-MC 	scientific respectibility    
C01515 00359	∂30-Jan-83  1459	GAVAN @ MIT-MC 	meta-epistemology, etc. 
C01518 00360	∂30-Jan-83  1537	DAM @ MIT-MC 	a fixed mind    
C01521 00361	∂30-Jan-83  1537	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
C01525 00362	∂30-Jan-83  1555	ISAACSON at USC-ISI 	Re:  meta-epistemology  
C01527 00363	∂30-Jan-83  1632	John C. Mallery <JCMa at MIT-OZ at MIT-MC> 	Tarskian Semantics   
C01531 00364	∂30-Jan-83  1638	John C. Mallery <JCMa at MIT-OZ at MIT-MC> 	Tarskian Semantics   
C01533 00365	∂30-Jan-83  1654	GAVAN @ MIT-MC 	innateness, sentences, etc.  
C01539 00366	∂30-Jan-83  2112	MINSKY @ MIT-MC
C01544 00367	∂30-Jan-83  2130	MINSKY @ MIT-MC 	innateness, sentences, etc. 
C01548 00368	∂30-Jan-83  2156	GAVAN @ MIT-MC 	counter-productive tactics   
C01551 00369	∂30-Jan-83  2205	GAVAN @ MIT-MC 	some mathematical results    
C01553 00370	∂30-Jan-83  2206	John C. Mallery <JCMa at MIT-OZ> 	Hallelujah: Saved from Chomskian Depravity    
C01555 00371	∂30-Jan-83  2249	John McCarthy <JMC@SU-AI> 	innateness        
C01557 00372	∂31-Jan-83  0019	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
C01560 00373	∂31-Jan-83  0019	GAVAN @ MIT-MC 	Kant: no dummy
C01566 00374	∂31-Jan-83  0019	MINSKY @ MIT-MC 	Kant: no dummy    
C01571 00375	∂31-Jan-83  0019	ISAACSON at USC-ISI 	Re:  meta epitemology, etc.  
C01573 00376	∂31-Jan-83  0100	John C. Mallery <JCMa at MIT-OZ> 	innateness, sentences, etc.    
C01576 00377	∂31-Jan-83  0101	John McCarthy <JMC@SU-AI> 	There you go again, Gavan.       
C01577 00378	∂31-Jan-83  0101	MINSKY @ MIT-MC 	innateness, sentences, etc.      
C01582 00379	∂31-Jan-83  0301	GAVAN @ MIT-MC 	There you don't go again, JMC.    
C01585 00380	∂31-Jan-83  0309	GAVAN @ MIT-MC 	meta-epistemology, etc. 
C01587 00381	∂31-Jan-83  0454	ISAACSON at USC-ISI 	Pre-natal meta-epistemology  
C01589 00382	∂31-Jan-83  0819	BATALI @ MIT-MC 	There you don't go again, JMC.   
C01591 00383	∂31-Jan-83  0934	BATALI @ MIT-MC 	Something Changes 
C01595 00384	∂31-Jan-83  1118	ISAACSON at USC-ISI 	Re:  Something changes  
C01598 00385	∂31-Jan-83  1144	DAM @ MIT-MC 	innateness 
C01601 00386	∂31-Jan-83  1333	LEVITT @ MIT-MC 	Languages, tenses 
C01604 00387	∂31-Jan-83  1525	DAM @ MIT-MC 	innateness, sentences
C01610 00388	∂31-Jan-83  1628	MINSKY @ MIT-MC 	innateness, sentences  
C01615 00389	∂31-Jan-83  1712	DAM @ MIT-MC 	innateness, sentences
C01623 00390	∂31-Jan-83  1939	GAVAN @ MIT-MC 	Pre-natal meta-epistemology  
C01626 00391	∂31-Jan-83  1952	John C. Mallery <JCMa at MIT-OZ> 	Re:  meta-epistemology, etc.   
C01628 00392	∂31-Jan-83  2016	GAVAN @ MIT-MC 	Languages, tenses  
C01630 00393	∂31-Jan-83  2041	John McCarthy <JMC@SU-AI> 	narrowness        
C01631 00394	∂31-Jan-83  2046	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
C01633 00395	∂31-Jan-83  2059	JCMa@MIT-OZ 	Putnam on Chomsky, and Innateness    
C01635 00396	∂31-Jan-83  2103	ISAACSON at USC-ISI 	Re:  Pre-natal meta-epistemology  
C01637 00397	∂31-Jan-83  2126	John McCarthy <JMC@SU-AI> 	CORRESPONDENCE, etc. and meta-epistemology again     
C01665 00398	∂31-Jan-83  2145	GAVAN @ MIT-MC 	Determinate Being  
C01667 00399	∂31-Jan-83  2232	MINSKY @ MIT-MC 	innateness, sentences  
C01670 00400	∂31-Jan-83  2242	LEVITT @ MIT-MC 	"primitive" representations of space and time   
C01674 00401	∂31-Jan-83  2326	GAVAN @ MIT-MC 	There you don't go again, JMC.    
C01677 00402	∂31-Jan-83  2354	John McCarthy <JMC@SU-AI> 	criticism of coherence and consensus       
C01680 00403	∂01-Feb-83  0033	GAVAN @ MIT-MC 	criticism of coherence and consensus   
C01686 00404	∂01-Feb-83  0037	JCMa@MIT-OZ 	Languages, tenses
C01690 00405	∂01-Feb-83  0040	JCMa@MIT-OZ 	Re:  meta-epistemology, etc.    
C01692 00406	∂01-Feb-83  0138	JCMa@MIT-OZ at MIT-MC 	Putnam: Correspondence, Tarski, and Truth 
C01699 00407	∂01-Feb-83  0342	LEVITT @ MIT-MC 	Putnam: Correspondence, Tarski, and Truth  
C01703 00408	∂02-Feb-83  1823	BATALI @ MIT-MC 	And on his farm there was a cow  
C01709 00409	∂02-Feb-83  2328	ZVONA @ MIT-MC 
C01711 00410	∂03-Feb-83  0106	MINSKY @ MIT-MC
C01714 00411	∂03-Feb-83  0122	DAM @ MIT-MC 	Meta-epistemology    
C01720 00412	∂03-Feb-83  0126	LEVITT @ MIT-MC 	sequences in space, time, and intra-mental experiments    
C01726 00413	∂03-Feb-83  0341	ISAACSON at USC-ISI 	Sparseness in Stringland     
C01748 00414	∂03-Feb-83  0855	DAM @ MIT-MC 	Semantic Grammars    
C01752 00415	∂03-Feb-83  0920	DAM @ MIT-MC 	existence before essence  
C01756 00416	∂03-Feb-83  0925	MINSKY @ MIT-MC 	And on his farm there was a cow  
C01760 00417	∂03-Feb-83  0931	DAM @ MIT-MC 	Piaget
C01763 00418	∂03-Feb-83  0945	DAM @ MIT-MC 	[MINSKY: innateness, sentences]
C01765 00419	∂03-Feb-83  0955	DAM @ MIT-MC 	Sparseness 
C01768 00420	∂03-Feb-83  2242	MINSKY @ MIT-MC 	innateness, sentences  
C01810 00421	∂03-Feb-83  2241	BATALI @ MIT-MC 	Semantic Grammar  
C01888 ENDMK
C⊗;
∂12-Jan-83  0024	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Popper  
Date: Wednesday, 12 January 1983, 03:20-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Popper
To: GAVAN at MIT-MC
Cc: BAK at MIT-OZ at MIT-MC, Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>,
    philosophy-of-science at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 11 Jan 83 15:16-EST from GAVAN at MIT-MC

        Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Tue 11 Jan 83 15:18:27-EST
        Date: Tuesday, 11 January 1983  15:16-EST
        Sender: GAVAN @ MIT-OZ
        From: GAVAN @ MIT-MC
        To:   BAK @ MIT-OZ
        Cc:   Carl Hewitt <Hewitt @ MIT-OZ>, philosophy-of-science @ MIT-OZ
        Subject: Popper
        In-reply-to: The message of 10 Jan 1983  20:16-EST from BAK

        But Lakatos
        argues that all theories are equally undisprovable, since the failure
        of any empirical test does not necessarily result in the rejection of the
        theory it sought to test.  Instead, Lakatos shows by example, the theorist
        may simply add a *ceteris paribus* clause to his/her theory which will
        explain away the anomaly, thus salvaging the theory from the jaws of
        falsificationism.  Feyerabend, in turn, uses this as an argument in support
        of his thesis that, "in science, anything goes."

I don't agree that "in sceince, anything goes". In the first place
it's not so easy to add *ceteris paribus* clauses to avoid trouble.  Consider
the problem of adding such clauses to Newtonian Mechanics to avoid the
trouble caused by the observations mentioned by Feynman in his treatment
on Special Relativity.  Exactly WHAT could you add to Newtonian Mechanics?
Secondly adopting such tactics IN PRACTICE doesn't seem to do any good!

∂12-Jan-83  0026	LEVITT @ MIT-MC 	Chaos vs complexity    
Date: Wednesday, 12 January 1983  03:25-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, MINSKY @ MIT-OZ,
      phil-sci @ MIT-OZ
Subject: Chaos vs complexity
In-reply-to: The message of 12 Jan 1983  01:24-EST from GAVAN

     This is why scientists have always had to resort to ceteris paribus
    clauses ("all things being equal") in their theories.  The ability of
    a theorist to make this move is, according to Lakatos, what makes all
    theories equally undisprovable.  When faced with unfavorable empirical
    results they can just (a la Winston) add an unless-clause to their
    theory.  Feyerabend argues that recognition of this is tantamount to
    recognizing that science is chaos.  Maybe that's all right.

You mentioned this before.  Do YOU believe this Lakatos/Feyerabend
thesis?  It seems to be equating chaos and complexity, which I don't
buy.  Don't they believe we compare and improve the utility of our
theories (e.g.  Occam's razor -- 1 unless-clause per 50 cases is more
useful than 50 per 50)?  I don't understand the intuition in what
they're saying.  What am I missing?

∂12-Jan-83  0057	GAVAN @ MIT-MC 	Popper   
Date: Wednesday, 12 January 1983  03:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   Carl Hewitt <Hewitt @ MIT-OZ>
Cc:   BAK @ MIT-OZ, philosophy-of-science @ MIT-OZ
Subject: Popper
In-reply-to: The message of 12 Jan 1983 03:20-EST from Carl Hewitt <Hewitt>

    Date: Wednesday, 12 January 1983, 03:20-EST
    From: Carl Hewitt <Hewitt>
    To:   GAVAN
    cc:   BAK, Carl Hewitt <Hewitt>, philosophy-of-science, Hewitt
    Re:   Popper

            Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Tue 11 Jan 83 15:18:27-EST
            Date: Tuesday, 11 January 1983  15:16-EST
            Sender: GAVAN @ MIT-OZ
            From: GAVAN @ MIT-MC
            To:   BAK @ MIT-OZ
            Cc:   Carl Hewitt <Hewitt @ MIT-OZ>, philosophy-of-science @ MIT-OZ
            Subject: Popper
            In-reply-to: The message of 10 Jan 1983  20:16-EST from BAK

            But Lakatos
            argues that all theories are equally undisprovable, since the failure
            of any empirical test does not necessarily result in the rejection of the
            theory it sought to test.  Instead, Lakatos shows by example, the theorist
            may simply add a *ceteris paribus* clause to his/her theory which will
            explain away the anomaly, thus salvaging the theory from the jaws of
            falsificationism.  Feyerabend, in turn, uses this as an argument in support
            of his thesis that, "in science, anything goes."

    I don't agree that "in sceince, anything goes". 

That's fine with me.  It's not my argument, but Feyerabend's.  I just brought
it up because you and AGRE seemed to agree that "chaos won't do" without
taking Feyerabend's argument into consideration.  What if he's right?
If you dismiss anarchy out-of-hand in some published work, expect to hear
a chorus of "WHAT ABOUT FEYERABEND?"

    In the first place, it's not so easy to add
    *ceteris paribus* clauses to avoid trouble.  Consider
    the problem of adding such clauses to Newtonian Mechanics to avoid the
    trouble caused by the observations mentioned by Feynman in his treatment
    of Special Relativity.  Exactly WHAT could you add to Newtonian Mechanics?
    Secondly, adopting such tactics IN PRACTICE doesn't seem to do any good!

I'm certainly not a physicist so I don't even want to try to answer
these questions directly, but Lakatos (in the paper, not in the
summary) provides examples in the history of physics of the invocation
of *ceteris paribus* clauses to save a theory.

Also, you're likely to have less need for *ceteris paribus* clauses in
physics than in other sciences, due to the nature of the problem
domain.  Coming up with a mental theory is likely to be much more
difficult than a physical one, and in practice you'll need many more
*ceteris paribus* clauses.  The same holds for social theory as well.

Question: Can you ever explain or even understand social phenomena
(such as a scientific community) to the same degree as physical phenomena
can be understood, considering that any one member of a society is likely
to be as complex as you are?

∂12-Jan-83  0107	GAVAN @ MIT-MC 	Chaos vs complexity
Date: Wednesday, 12 January 1983  04:06-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   LEVITT @ MIT-OZ
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, MINSKY @ MIT-OZ,
      phil-sci @ MIT-OZ
Subject: Chaos vs complexity
In-reply-to: The message of 12 Jan 1983  03:25-EST from LEVITT

    Date: Wednesday, 12 January 1983  03:25-EST
    From: LEVITT
    Sender: LEVITT
    To:   GAVAN
    cc:   William A. Kornfeld <BAK>, MINSKY, phil-sci
    Re:   Chaos vs complexity

         This is why scientists have always had to resort to ceteris paribus
        clauses ("all things being equal") in their theories.  The ability of
        a theorist to make this move is, according to Lakatos, what makes all
        theories equally undisprovable.  When faced with unfavorable empirical
        results they can just (a la Winston) add an unless-clause to their
        theory.  Feyerabend argues that recognition of this is tantamount to
        recognizing that science is chaos.  Maybe that's all right.

    You mentioned this before.  Do YOU believe this Lakatos/Feyerabend
    thesis?  

Well, there are two separate theses here.  I agree with Lakatos pretty
much and think that Feyerabend is perhaps a tongue-in-cheek Popperian
critic of Lakatos.  If he is though, he's hiding his real beliefs rather
well.

    It seems to be equating chaos and complexity, which I don't
    buy.  Don't they believe we compare and improve the utility of our
    theories (e.g.  Occam's razor -- 1 unless-clause per 50 cases is more
    useful than 50 per 50)?  

Lakatos explicitly does buy this, with certain reservations.  Note that
if YOU buy into this, you'll drop anything you're doing with cognitive
science and run right off and practice rat psychology with B.F. Skinner.

    I don't understand the intuition in what
    they're saying.  What am I missing?

You're missing the actual texts.  Much undoubtedly gets lost in
translation.

∂11-Jan-83  2345	GAVAN @ MIT-MC 	Scientific Community Metaphor compatible with Society of Mind?  
Date: Wednesday, 12 January 1983  02:08-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   philosophy-of-science @ MIT-OZ
Subject: Scientific Community Metaphor compatible with Society of Mind?
In-reply-to: The message of 11 Jan 1983 20:13-EST from John Batali <Batali>

    Date: Tuesday, 11 January 1983, 20:13-EST
    From: John Batali <Batali>
    To:   philosophy-of-science
    Re:   Scientific Community Metaphor compatible with Society of Mind?

    There are at least two important issues here, and I've only seen
    discussion of the first: 

    	1.  Scientific Communities as gatherers of knowledge;
    	2.  The importance of gathering knowledge for intelligence.

    Much of the discussion about the scientific community, and all of it
    here has been to defend or question the ability of the scientific
    community to learn about the world, quickly, reliably, accurately.
    Implicit in the arguments in support of the scientific community
    metaphor is the assumption that this knowledge-gathering activity is
    somehow foundational for intelligent systems.

    I question that assumption.  It seems to me that the point of an
    intelligent system is to DO things, to perform actions "intelligently."
    KNOWing things will help, but the point of the system is to DO -- not to
    KNOW.  And this is why the scientific community metaphor seems to me of
    only weak utility: the scientific community is set up explicitly only to
    learn about the world, not to do anything about it.  Sure, they do
    things -- experiments -- but those actions are in the service of the
    accumulation of knowledge.

I basically agree that action is an important determinant of
knowledge, but I would balk at the behavioralism implicit here.
There's no reason we can't accept both assumptions, that the
performance of action results in knowledge and that knowledge can
motivate action.  Of course, both knowledge and action are influenced
directly by our ways of referring (this is essentially the Functional
State Identity Thesis -- a mental state is equivalent to a conjunction
of disjunctive states of belief, desire, and reference -- Hilary
Putnam's static formulation).  This is to say that all this knowing
and doing is occurring in a linguistic community, and the character of
the reference system embodied in the language of the community affects
both the knowing and the doing (and the knowing and doing affect the
character of the reference system).

    For an intelligence, this is backwards:  Knowledge must be in the
    service of action.  

Action can also be in the service of knowledge.  Otherwise, why would
I have driven my car halfway across the country to come to MIT?  Maybe
that wasn't something an "intelligence" would have done.  Isn't there
some sort of (gosh) dialectic here?

    And there are many kinds of actions besides those
    that increase our knowledge.  So we should be studying DOING systems as
    our metaphors, rather than KNOWING systems.  

Not "rather" but "in addition to."

    The society of mind, for
    example, is set up as a bunch of agents trying to do things, and
    communicating in various ways about what they are trying to do.

What do they communicate if not something that they "know"?  Do they
not "know" that they are trying to "do" something?

    This approach might also be easier.  Much of the discussion here has
    been about the problems involved with trying to claim that science
    "advances".  This is because "what science is for" is a very abstract
    notion -- something like "knowledge" (whatever that is).  But a behaving
    system is presented with specific, concrete, real-world goals.  We can
    thus compare alternative approaches in terms of their satisfaction of
    those goals.

But "science" has always been "for" specific, concrete, real-world
goals.  See Jurgen Habermas, *Knowledge and Human Interests*.  Science
serves action.  And the action that science serves is, in a very real
sense, the domination of nature.  Scientific communities are set up
BOTH to know something about the world and to do something to it.

I feel very strongly both ways.  I agree with Batali (to the extent
that his remarks reflect Peircean pragmatism), but I think his
criticism is too strong.  Clearly, the two approaches are not
(necessarily) mutually exclusive.  They're not "alternatives" but
rather complements.
∂12-Jan-83  0011	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 

    Date: Wednesday, 12 January 1983, 01:52-EST
    From: Carl Hewitt <Hewitt>
    To:   GAVAN
    cc:   Carl Hewitt <Hewitt>, AGRE, batali, philosophy-of-science, Hewitt
    Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

Let me respond without yanking the whole discussion back in.

        Normal science can certainly be conducted within a
        revolutionary paradigm, and cognitive science is not internally
        anarchistic.  When Feyerabend says that science is anarchistic, he's
        speaking of science taken as a whole.  Say that on Mars science is
        organized in such a way that Martian scientists are required to show
        that any new theory has greater explanatory power than does the theory
        it seeks to displace.

    This requirement seems completely unreasonable, irrational, and
    unworkable to me.

Well, I agree, but that's a requirement some (Lakatos, for example)
include in their sets of principles around which scientific
communities organize.  If you can't accept that one (neither can I),
whose will you accept.  If you're going to decide, what are to be your
decision criteria?

        Now suppose one of these Martian scientists visited
        Earth and discovered that Earth psychology had all these competing
        paradigms which interact very little.  He/She/It would regard the
        state of Earth science as anarchistic and thus unlikely to progress.

    I would not regard the state of earth science as "anarchistic" but
    rather as working along the usual principles of most research communities.

You can apply the term "anarchistic" or not, as you wish.  But it
seems to me that when you have all these competing paradigms trying to
explain the same phenomena without intercommunicating, you've got
anarchy.

        How often do cognitivists and behaviorists have joint conferences?
        How many joint journals do they have?  Who sponsors both enterprises?

    Why should they have joint conferences or journals?  What good do you
    think it would do?  Do you think that it is a workable proposal?

It's certainly not a workable proposal, which is my point.  If they
have the same problem domain and they don't intercommunicate, then the
overall state of science in that problem domain is certainly chaotic.

    It's not clear to me that agents in the Society of the Mind communicate
    using messages in any way which is analogous to communication in
    scientific-engineering communities. Do you see any direct similarities?

It's possible that, when you refer to the communications of agents in
the Society of the Mind, you have in mind somebody's explication of
the metaphor with which I'm not familiar.  But, assuming that you
don't and speaking solely on the level of the competing metaphors, I
ask you what makes you think scientists and engineers are any
different from anyone else.  They're just living, breathing, thinking
human beings, just like everyone else.  Scientists and (you've added)
engineers aren't the only people who hold conferences, have journals,
etc.  Businessmen do too.  Would you add them?  What about members of
the intelligence community?

            I have in mind the principles by which scientific communities 
            ACTUALLY work.  Determining the principles by which
            scientific communities work is itself a scientific question which
            is addressed by a scientific community.

        The problem is that there's no agreement on how scientific communities
        actually work.  Popper, Kuhn, Lakatos, and Feyerabend all draw on
        empirical, historical evidence to support their incommensurable
        theories.

    Why do you think they are incommensurable?  They seem to rationally
    discuss issues and argue with each other a lot.

Kuhn's *Structure of Scientific Revolutions* and Feyerabend's *Against
Method* are DIAMETRICALLY opposed to Popper's *Logic of Scientific
Discovery*. The public arguments are a cover for private wars.  I've
also heard stories (from reliable sources) about nasty mud-slinging
between Popper and Lakatos at the London School of Economics (before
the latter's death).

        Anyway, if you want to use "the principles by which scientific
        communities ACTUALLY work" you'll have to choose somebody's set of
        principles.

    Obviously we will have to identify some principles like Commutativity and
    Sponsorship.  It's not clear that we have to restrict ourselves to one
    source of ideas for principles.

Hopefully, you'll select the right ones.

        Whether you select a society-of-mind metaphor or a
        scientific-community metaphor as your heuristic, you'll have to buy
        into someone's theory.  If you select a society-of-mind metaphor,
        you'll have a much broader range of theories from which to choose than
        if you select a scientific-community metaphor.  Moreover, the former
        will give you more detailed theories than will the latter, because the
        sociology of knowledge is still in its infancy compared (even) to
        sociology.  Anyway, there are highly developed theories of social action
        floating around, and I can give you some citations if you like.

    All of this is not clear to me.  I would appreciate citations to what
    you feel are the most valuable social action papers.

OK.  I'll forward you the cites I sent to BAK.

∂12-Jan-83  0820	DAM @ MIT-MC 	Occum's razor   
Date: Wednesday, 12 January 1983  11:18-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Phil-sci @ MIT-OZ
Subject: Occum's razor


	I am perplexed in a manner similar to that expressed by David
Levitt.  Why is it that in all of this discussion of the philosophy of
science I have not heard significant mention of what I have always
taken to be the most plausible theory of how scientific ideas are
chosen, namely Occam's razor.  There is a simple concrete
interpretation of Occam's razor in terms of observations and theory
formation.  Suppose that the set of "observations" can be written as a
set of first order sentences (I don't for a minute believe this but it
is a useful approximation for the purposes of this discussion).  The
idea in Occam's razor is to express these observations in the most
concise form possible.  For example, suppose the observations are:
Bird(Fred), Flys(Fred), Bird(George), Flys(George), Bird(Harry),
not(Flys(Harry)) ...  The theory might be that all birds fly with the
exception of certain cases.  Thus a theory might also be a set of
first order sentences meeting the following two conditions:

1) The theory logically implies the observations

2) The theory is the shortest theory known which satisfies 1)

	The first condition might seem strange since lots of theories
have exceptions, but remember that I will allow the listing of
exceptions in the theory.
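
[A minimal sketch of the two conditions above, added as an
illustration; it is not part of the original message.  Everything in
it is a toy assumption of mine: ground facts only, one default rule,
and a crude size count standing in for "shortest".]

# Observations from the example: Bird(Fred), Flys(Fred), Bird(George),
# Flys(George), Bird(Harry), not(Flys(Harry)).
OBSERVATIONS = {
    ("Bird", "Fred", True), ("Flys", "Fred", True),
    ("Bird", "George", True), ("Flys", "George", True),
    ("Bird", "Harry", True), ("Flys", "Harry", False),
}
INDIVIDUALS = {"Fred", "George", "Harry"}

def closure(facts, rule, exceptions):
    # Forward-chain the single default rule: Bird(x) -> Flys(x),
    # unless x is a listed exception (then Bird(x) -> not Flys(x)).
    derived = set(facts)
    if rule:
        for x in INDIVIDUALS:
            if ("Bird", x, True) in derived:
                derived.add(("Flys", x, x not in exceptions))
    return derived

def entails(theory, observations):            # condition 1)
    facts, rule, exceptions = theory
    return observations <= closure(facts, rule, exceptions)

def size(theory):                             # what condition 2) minimizes
    facts, rule, exceptions = theory
    return len(facts) + (1 if rule else 0) + len(exceptions)

flat_listing = (set(OBSERVATIONS), False, [])             # just restate the data
birds_fly = ({("Bird", x, True) for x in INDIVIDUALS},    # "all birds fly,
             True, ["Harry"])                             #  except Harry"

acceptable = [t for t in (flat_listing, birds_fly) if entails(t, OBSERVATIONS)]
best = min(acceptable, key=size)
print(size(flat_listing), size(birds_fly), best is birds_fly)   # 6 5 True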

	This idea can be improved in many ways.  Likelihood can be
introduced and one can speak of the information (or entropy) of the
observations relative to a theory which includes likelihoods.  The
observations can be taken in some ontologically independent (language
independent) way and thus part of the problem is to find an ontology.
An improvement along this latter line would account for Kuhn's
revolutions.

	Why is such a scheme not taken seriously by Popper, Lakatos,
or Kuhn?

	David Mc

∂12-Jan-83  0853	DAM @ MIT-MC 	Goals 
Date: Wednesday, 12 January 1983  11:39-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   Phil-Sci @ MIT-OZ
Subject: Goals

	Date: Tuesday, 11 January 1983, 20:13-EST
	From: John Batali <Batali>

	...	But a behaving
	system is presented with specific, concrete, real-world goals.  We can
	thus compare alternative approaches in terms of their satisfaction of
	those goals.

	By a "behaving system" do you mean something like a person?
Tell me, what is your "specific concrete real-world" life goal?  I do
not mean to be flippent and I think you do have a point.  However I
think concrete goals are restricted to "normal engineering" where the
ontology and purposes are fixed and accepted by a cummunity.  When
ontological revolutions occur they alter goals as well as beliefs.  A
person who views his life in terms of money has a different outlook on
life than a person who views his life in terms of relationships.  This
is not just to say that one person can pursue money while another
pursues love.  It is rather to say that a person who perceives the world in
terms of a numerical value system views his goals differently from
a person who perceives the world in terms of friendship, trust, and
personal commitment.

	David Mc

∂12-Jan-83  0946	MINSKY @ MIT-MC 	Occum's razor
Date: Wednesday, 12 January 1983  12:48-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ, MINSKY @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  11:18-EST from DAM


There does exist a deep and satisfactory theory of Occam's razor.  It
is the line started by Solomonoff on inductive inference, and followed
by Kolmogoroff, and later Chaitin and recently by Leonid Levin.

The theory proposes that data should be explained by the simplest
formula that produces it.  Two complications:

1. Simplicity depends on the resources available to describe.
Solomonoff's deep theory shows the extent to which this becomes
independent, say, of which Turing machine or recursive functions you
start with, as the formulas become more complex.  There are many
different formulas, in general.  Solomonoff argued that you have to
weight or combine them somehow.

2.  You can't isolate things from the rest of your scientific context.
Solomonoff also argued that you lose if you separate those few facts
from everything else you know about birds, animals, things in general
and other laws of nature you have arrived at.

As I said, the result is a deep and profound theory.  Unfortunately it
appears to be somewhat non-computable, but Solomonoff and, recently,
Levin appear to have made some progress on finding sequences of
computable approximations to it.
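
[A toy sketch of this "weight every explanation by its simplicity and
combine them" idea, added as an illustration; it is not Solomonoff's
actual construction.  The three hypotheses and their description
lengths in bits are made up for the example.]

def h_always_one(history):  return 1.0                            # "every bit is 1"
def h_alternate(history):   return float(len(history) % 2 == 0)   # 1,0,1,0,...
def h_copy_last(history):   return float(history[-1]) if history else 0.5

HYPOTHESES = [(h_always_one, 3), (h_alternate, 5), (h_copy_last, 7)]  # (predictor, bits)

def predict_next(history):
    # P(next bit = 1) as a simplicity-weighted mixture: each hypothesis
    # starts with prior weight 2**(-bits) and is then multiplied by the
    # probability it assigned to the history actually observed, so a
    # hypothesis the data falsifies drops to weight zero.
    weights, preds = [], []
    for predictor, bits in HYPOTHESES:
        w = 2.0 ** (-bits)
        for t, bit in enumerate(history):
            p1 = predictor(history[:t])
            w *= p1 if bit == 1 else (1.0 - p1)
        weights.append(w)
        preds.append(predictor(history))
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, preds)) / total if total else 0.5

print(predict_next([1, 1, 1, 1]))   # 1.0 -- the simplest surviving hypothesis dominates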

That theory is my current paradigm.  At the moment I regard all that
stuff about Occam's razor, Popper, Kuhn, and even Lakatos, as
interesting precursor children who made some simple models.  They play
roles in the ancient history of the subject.  But they are so
technically and psychologically simple-minded that I find the phil-sci
discussions only amusing echoes of the past.  Amusing, but I am
saddened to see them taken seriously here in the post Solomonoff era.

∂12-Jan-83  1048	GAVAN @ MIT-MC 	Occum's razor 
Date: Wednesday, 12 January 1983  13:42-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  11:18-EST from DAM

    Date: Wednesday, 12 January 1983  11:18-EST
    From: DAM
    Sender: DAM
    To:   Phil-sci
    Re:   Occum's razor

    	I am perplexed in a manner similar to that expressed by David
    Levitt.  Why is it that in all of this discussion of the philosophy of
    science I have not heard significant mention of what I have always
    taken to be the most plausible theory of how scientific ideas are
    chosen, namely Occam's razor.  There is a simple concrete
    interpretation of Occam's razor in terms of observations and theory
    formation.  Suppose that the set of "observations" can be written as a
    set of first order sentences (I don't for a minute believe this but it
    is a useful approximation for the purposes of this discussion).  The
    idea in Occam's razor is to express these observations in the most
    concise form possible.  For example, suppose the observations are:
    Bird(Fred) Flys(Fred), Bird(George) Flys(George), Bird(Harry),
    not(Flys(Harry)) ...  The Theory might be that all birds fly with the
    exception of certain cases.  Thus a theory might also be a set of
    first order sentences meeting the following two conditions:

    1) The theory logically implies the observations

    2) The theory is the shortest theory known which satisfies 1)

    	The first condition might seem strange since lots of theories
    have exceptions but remember that I will allow the listing of
    exceptions in the theory.

    	This idea can be improved in many ways.  Likelihood can be
    introduced and one can speak of the information (or entropy) of the
    observations relative to a theory which includes likelihoods.  The
    observations can be taken in some ontologically independent (language
    independent) way and thus part of the problem is to find an ontology.
    An improvement along this latter line would account for Kuhn's
    revolutions.

    	Why is such a scheme not taken seriously by Popper, Lakatos,
    or Kuhn?

    	David Mc
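
A toy rendering of the quoted proposal, in Python: candidate theories
are given as hand-written descriptions paired with predictors,
condition 1 (the theory implies the observations) is approximated by
checking predictions rather than by first-order proof, and condition 2
picks the shortest adequate description.  The bird names beyond Fred,
George, and Harry are invented padding for the example.

# Observations: which birds fly (Fred, George, Harry from the quoted
# message; Sam and Tina are invented).
OBSERVATIONS = {"Fred": True, "George": True, "Sam": True, "Tina": True,
                "Harry": False}

BRUTE_FACTS = dict(OBSERVATIONS)   # theory 1 just restates every observation

CANDIDATE_THEORIES = [
    ("Flys(Fred), Flys(George), Flys(Sam), Flys(Tina), not(Flys(Harry))",
     BRUTE_FACTS.get),                              # one conjunct per fact
    ("forall x. Bird(x) -> Flys(x), except Harry",  # rule plus listed exception
     lambda bird: bird != "Harry"),
]

def implies_observations(theory):
    """Condition 1, approximated: the theory predicts every observation."""
    _, predict = theory
    return all(predict(bird) == flies for bird, flies in OBSERVATIONS.items())

def shortest_adequate_theory():
    """Condition 2: among adequate theories, prefer the shortest description."""
    adequate = [t for t in CANDIDATE_THEORIES if implies_observations(t)]
    return min(adequate, key=lambda t: len(t[0])) if adequate else None

print(shortest_adequate_theory()[0])   # the rule-plus-exception theory wins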


What makes you think it isn't?  As I understand you, you're basically
reformulating Duhem's simplism, which Popper and Lakatos do treat.
Below is a selection (on methodological falsificationism) drawn from
the summary of Lakatos.

				*****

There is an important demarcation between passivist and activist
theories of knowledge.  Passivists hold that true knowledge is
Nature's imprint on a perfectly inert mind: mental activity can only
result in bias and distortion.  Activists hold that we cannot read the
book of nature without mental activity.  Conservative activists hold
that we are born with our basic expectations; with them we turn the
world into `our world' but must then live forever in the prison of our
world.  Revolutionary activists hold that conceptual frameworks can be
developed and also replaced by new and better ones; it is we who
create our prisons and we can also, critically, demolish them.

There are two schools of revolutionary activism, Duhem's simplism and
Popper's methodological falsificationism.  Duhem accepts the
conventionalists' position that no physical theory ever crumbles under
the weight of refutations, but claims that the continual addition of
ceteris paribus clauses will force the theory to lose its original
simplicity.  Once this simplicity is lost, he argues, the theory has
to be replaced.  This reduces falsificationism to subjective taste or
scientific fashion.  Popper set out to find a criterion that is at
once both more objective and more hard-hitting.  His methodological
falsificationism is both conventionalist and falsificationist, but he
differs from conservative conventionalists in holding that the
statements decided by agreement are not spatio-temporally universal
but are spatio-temporally singular.  And he differs from the dogmatic
falsificationist in holding that the truth-value of such statements
cannot be proved by facts but, in some cases, may be decided by
agreement.

The Duhemian conservative conventionalist makes unfalsifiable by fiat
some spatio-temporally universal theories which are distinguished by
their explanatory power, simplicity, or beauty.  The Popperian
methodological falsificationist makes unfalsifiable by fiat some
spatio-temporally singular statements which are distinguished by the
fact that there exists at the time a `relevant technique' such that
`anyone who has learned it' will be able to decide that the statement
is `acceptable.' [Note the ideal speech community implicit in this].
Such a statement may be called an `observational' or `basic' statement
[see the summary treatment of dogmatic falsificationism], but only in
inverted commas.  The very selection of such statements is a matter of
decision, which is not based exclusively upon psychological
considerations.  This decision is then followed by a second kind of
decision concerning the separation of the set of accepted `basic'
statements from the rest.

These two decisions correspond to the two assumptions of dogmatic
falsificationism [see the summary], but with important differences.
Above all, the methodological falsificationist is not a
justificationist.  He has no illusions about `experimental proofs' and
is fully aware of the fallibility of his decisions and the risks he
takes.

Realizing that the experimental techniques of science are fallible, he
nevertheless applies them, not as theories under test but as
unproblematic background knowledge accepted temporarily while testing
another theory.  In this way the methodological falsificationist uses
our most successful theories as extensions of our senses and widens
the range of theories which can be applied in testing far beyond the
dogmatic falsificationist's range of strictly observational theories.
The need for decisions to demarcate the theory under test from
unproblematic background knowledge is a characteristic feature of this
brand of methodological falsificationism.  [page 106]

				*****

So you see, the problem is that Galileo's theory really wasn't simpler
than previous theories.  It was actually more complex, since the
theory of optics was also involved in his demonstration of his
astronomical theory.  Some of his contemporaries refused to accept his
astronomical theory because they couldn't accept his optical theory.
He wasn't observing the heavenly bodies; he was just looking inside a
long cylinder.

DAM: Please don't take this message as an indication that I disagree
with you.  I'm not so sure that you couldn't come up with calculations
for the likelihood of a given theory.  But you'll have to make the
probabilities conditional on the likelihood of the observational
theory with which the empirical evidence is generated.

∂12-Jan-83  1106	DAM @ MIT-MC 	Occum's razor   
Date: Wednesday, 12 January 1983  14:04-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  12:48-EST from MINSKY


	I am familiar with many of the results you refer to.  However
I feel that these results are based on an impoverished model of
observation and explanation.  The approaches you refer to take
"observation" to be a bit string and "explanation" to be a Turing
machine which generates that bit string.  Such a theory is a good
example of what I call "computational reductionism".  While all
programs are in fact "equivalent" to a Turing machine, I will never
UNDERSTAND an FFT program until I have understood Fourier transforms.
Similarly it may be that I will never understand cognition until I
understand some notion of "statement", "inference", and "perceptual
truth".  Kolmogoroff complexity theory is not formulated in this
framework and is therefore not likely (I think) to be a useful
interpretation of Occam's razor.
	In saying that the only proper interpretation of Occam's razor
is Kolmogoroff complexity theory you sound like a behaviourist who
tells me that "cognitive processes" can't be important because they
are not formulated in the language of I/O relations.  While I agree
that the ideas of Popper, Kuhn, and Lakatos are not terribly important
for AI, I do think that we must develop a theory of "statement",
"observation", "inference", "ontology", and "representation", and
therefore I object to the arrogant demand for computational
reductionism.

	David Mc

∂12-Jan-83  1121	MINSKY @ MIT-MC 	Occum's razor
Date: Wednesday, 12 January 1983  14:13-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  14:04-EST from DAM


When you refer to Solomonoff's and Kolmogoroff's theories as an
"impoverished model of observation and explanation" I think you are
missing the richness of the theory.  If you follow Solomonoff's
explanations, you see that he considers all sorts of high level ideas
inside the machine - all sorts of fancy recursive, higher-level
explanations with exceptions, etc.  The Solomonoff formulation
discusses such things, though the Kolmogoroff (which I haven't read)
is presumably the dry kind of discussion to which you object.

I can't make any sense of your second paragraph.  Especially,
the reactionary part about how we "must" develop theories about those
old words.  Why mustn't we also develop theories about
"angels", "grace", "sin" and those, too?

∂12-Jan-83  1208	HEWITT @ MIT-XX 	Occum's razor
Date: Wednesday, 12 January 1983  14:19-EST
From: HEWITT @ MIT-XX
To:   MINSKY @ MIT-MC
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-XX, Phil-sci @ MIT-OZ
Reply-to:  Hewitt at MIT-XX
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  12:48-EST from MINSKY at MIT-MC

I find the ideas of Solomonoff, etc. to be fascinating
and am happy to hear that they are making progress on computational
methods.

However, I am skeptical that their ideas can do the whole
job.  Each research programme has FOCUS and MOMENTUM that guides
theory development.

I don't see how tools for economically describing the past
can account for a research programme's thrust and planning for
future growth and development.

∂12-Jan-83  1222	BATALI @ MIT-MC 	Goals   
Date: Wednesday, 12 January 1983  15:15-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Phil-Sci @ MIT-OZ
Subject: Goals
In-reply-to: The message of 12 Jan 1983  11:39-EST from DAM

    Date: Wednesday, 12 January 1983  11:39-EST
    From: DAM

    Tell me, what is your "specific concrete real-world" life goal?

I wasn't claiming anything about "life goals", whatever they are.  I
mean specific goals, like fixing the toilet, getting the bananas,
insulting the magistrate, and so on.  What kinds of goals do
scientists have?  Sometimes they are specific sorts of things, in
general, engineering-like problems: calibrate the frobnistan,
reflurbulate the hoffinstophometer, change the lightbulb.  I agree
with Carl that the engineering/science community is a fruitful one to
study, and for the reasons suggested by him and Gavan: here we have a
community engaged in the collection of knowledge for the pursuit of
action.  But it is not (just) the scientific community, which is
engaged in the collection of knowledge "for its own sake."

∂12-Jan-83  1241	DAM @ MIT-MC 	Occum's razor   
Date: Wednesday, 12 January 1983  15:36-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  14:13-EST from MINSKY


	..
	Why mustn't we also develop theories about "angels", "grace",
	and "sin"?

	Why do we need theories of internal cognitive state rather
than just talk about the genetic endowment and the stimulus history?
It is a matter of scientific judgement.  However it seems clear that
some higher level notions, probably non-computational in nature, are
needed to understand cognition.  The notions of "statement",
"entailment", and "empirical truth" are simply my favorite candidates.
The best argument for these notions is the nature of human language,
which is composed of statements that are taken to have truth values.
How does a purely computational theorist account for this natural
phenomenon?

	David Mc

∂12-Jan-83  1327	GAVAN @ MIT-MC 	Occum's razor 
Date: Wednesday, 12 January 1983  16:14-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  12:48-EST from MINSKY

    Date: Wednesday, 12 January 1983  12:48-EST
    From: MINSKY
    Sender: MINSKY
    To:   DAM
    cc:   Phil-sci, MINSKY
    Re:   Occum's razor

    . . . At the moment I regard all that
    stuff about Occam's razor, Popper, Kuhn, and even Lakatos, as
    interesting precursor children who made some simple models.  They play
    roles in the ancient history of the subject.  But they are so
    technically and psychologically simple-minded that I find the phil-sci
    discussions only amusing echoes of the past.  Amusing, but I am
    saddened to see them taken seriously here in the post Solomonoff era.

From your description of Solomonoff, there doesn't appear to be any insight
that isn't also in Lakatos.  Perhaps there's something more in Solomonoff.
If so, I'd like to know what it is.  It takes more than ad hominem arguments
like "technically and psychologically simple-minded" to convince me.

∂12-Jan-83  1612	GAVAN @ MIT-MC 	Occum's razor 
Date: Wednesday, 12 January 1983  16:18-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  14:13-EST from MINSKY

    Date: Wednesday, 12 January 1983  14:13-EST
    From: MINSKY
    Sender: MINSKY
    To:   DAM
    cc:   Phil-sci
    Re:   Occum's razor

    . . . 

    I can't make any sense of your second paragraph.  Especially,
    the reactionary part about how we "must" develop theories about those
    old words.  Why mustn't we also develop theories about
    "angels", "grace", "sin" and those, too?

A rat-psychologist might ask why we need to posit the existence of
mental agents.

∂12-Jan-83  1652	LEVITT @ MIT-MC 	Chaos vs complexity    
Date: Wednesday, 12 January 1983  19:48-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Chaos vs complexity
In-reply-to: The message of 12 Jan 1983  04:06-EST from GAVAN

    Date: Wednesday, 12 January 1983  04:06-EST
    From: GAVAN

        Date: Wednesday, 12 January 1983  03:25-EST
        From: LEVITT
        It seems to be equating chaos and complexity, which I don't
        buy.  Don't they believe we compare and improve the utility of our
        theories (e.g.  Occam's razor -- 1 unless-cause per 50 cases is more
        useful than 50 per 50)?  

    Lakatos explicitly does buy this, with certain reservations.  Note that
    if YOU buy into this, you'll drop anything you're doing with cognitive
    science and run right off and practice rat psychology with B.F. Skinner.

This might be the awful truth for some cognitive scientists but, as
BATALI pointed out, making systems that DO things is a fine way to
grow toward increasingly useful theories.

∂12-Jan-83  1907	MINSKY at MIT-OZ at MIT-MC 	OCcam's razor    
Date: 12 Jan 1983 2203-EST
From: MINSKY at MIT-OZ at MIT-MC
Subject: OCcam's razor
To: DAM at MIT-OZ at MIT-MC, PHIL-SCI at MIT-OZ at MIT-MC,
    MINSKY at MIT-OZ at MIT-MC

	From: DAM
	Why do we need theories of internal cognitive state rather
	than just talk about the genetic endowment and the stimulus
	history?  It is a matter of scientific judgement.  However it seems
	clear that some higher level notions, probably non-computational in
	nature, are needed to understand cognition.  The notions of
	"statement", "entailment", and "empirical truth" are simply my
	favorite candidates.

You keep missing the point.  I am not making a general philosophical
statement that NO "higher level" notions are needed.  I am simply
assaulting your personal choices about exactly those candidates.  I
keep saying that I think the reason cognitive and philosophical
theories have proceeded so slowly is BECAUSE those unfortunate ideas
got so embedded in our language and thought.

I won't discuss this any more in this forum, except to remark that in
my view this rejection of the great value of "entailment" and the like
is the reason why I am making more progress in cognitive
theory than everyone else.  Of course, anyone can disagree with that
assessment.
-------

∂12-Jan-83  1918	BATALI @ MIT-MC 	rat psychology    
Date: Wednesday, 12 January 1983  20:55-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
to:   phil-sci @ MIT-OZ
subject: rat psychology

I think that it is important to remember what this discussion is
supposedly about: the relevance of the philosophy of science to ai.
Now, as is my habit, let me point out that there are two ways to
construe this:

	1.  As a methodological discussion: what can the philosophy of
science tell us about how to do ai?

	2.  As a discussion of potentially powerful methods useful in
ai.  Could (should) we be writing learning programs that embody Occam
or Popper or Lakatos?

I personally think that the answer to the first question is "very
little."  The philosophy of science seems to trail behind the
successes of science and then account for them.

The second question is more interesting.  Another plus for pursuing
it: we need not limit ourselves to theories of scientific progress
that are currently accepted.  This is because of the differences between
the "goals" of science and those of a behaving agent.  A bogus theory
of the philosophy of science might be the basis of a very useful
theory of perception, for example.

In fact, it seems to me that many criticisms of various philosophies
of science turn on the problems associated with truth and science as
the seeker thereof.  While I believe that that is what science is, I
think that perception and learning are not as constrained.

∂12-Jan-83  1919	MINSKY @ MIT-MC 	Occum's razor
Date: Wednesday, 12 January 1983  22:12-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ, Hewitt @ MIT-OZ, Minsky @ MIT-OZ
Cc:   DAM @ MIT-OZ, Phil-sci @ MIT-OZ
Subject: Occum's razor
In-reply-to: The message of 12 Jan 1983  16:14-EST from GAVAN


Oops.  The difference between Solomonoff and Lakatos is that
Solomonoff's is a sort of precise, specific theory about an "optimal"
theory of induction, whereas Lakatos is a sort of critic of common
sense induction ideas.  There is little resemblance.  When I seemed to
dismiss the philosophers as "precursors" of Solomonoff, I didn't mean
that their informal theories are not important, but that I felt that
with the arrival of a technical theory that (in my view) meets many of
the difficulties that they struggled with, the philosophy of induction
enters a new period.

That is, I feel that the Solomonoff-Kolmogoroff concept is what Occam,
Popper, et al may have been searching for - and so, now, we need a new
set of Occams and Poppers to re-do the philosophy.  That is why I feel
that the old ones are outmoded.

∂12-Jan-83  2107	GAVAN @ MIT-MC 	Chaos vs complexity
Date: Thursday, 13 January 1983  00:01-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   LEVITT @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Chaos vs complexity
In-reply-to: The message of 12 Jan 1983  19:48-EST from LEVITT

    Date: Wednesday, 12 January 1983  19:48-EST
    From: LEVITT
    Sender: LEVITT
    To:   GAVAN
    cc:   phil-sci
    Re:   Chaos vs complexity

        Date: Wednesday, 12 January 1983  04:06-EST
        From: GAVAN

            Date: Wednesday, 12 January 1983  03:25-EST
            From: LEVITT
            It seems to be equating chaos and complexity, which I don't
            buy.  Don't they believe we compare and improve the utility of our
            theories (e.g.  Occam's razor -- 1 unless-cause per 50 cases is more
            useful than 50 per 50)?  

        Lakatos explicitly does buy this, with certain reservations.  Note that
        if YOU buy into this, you'll drop anything you're doing with cognitive
        science and run right off and practice rat psychology with B.F. Skinner.

    This might be the awful truth for some cognitive scientists but, as
    BATALI pointed out, making systems that DO things is a fine way to
    grow toward increasingly useful theories.

Well, as I said earlier, I generally agree with Batali (with certain
reservations).  But insofar as you're talking about Occam's razor (or
Duhem's simplism) you're not talking about writing programs or growing
increasingly "useful" (there's a lot of meaning packed into that word)
theories, but rather about assessing competing theories.  Haven't you
changed the subject here, or are you suggesting that Occam's razor be
applied to computer programs?

∂12-Jan-83  2124	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	reformulation
Date: Thursday, 13 January 1983, 00:17-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: reformulation
To: MINSKY at MIT-MC
Cc: William A. Kornfeld <BAK at MIT-OZ at MIT-MC>,
    phil-sci at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 12 Jan 83 00:11-EST from MINSKY at MIT-MC

    Date: Wednesday, 12 January 1983  00:11-EST
    From: MINSKY at MIT-MC
    To:   William A. Kornfeld <BAK at MIT-OZ>
    Re:   Popper, again.

    ...

    I find that falsification stuff sensible but not central.
    It doesn't touch enough on PLAUSIBILITY.  If we assume that the goal
    isn't finding universals, but adjusting the conditional range of the
    conditions under which we find it sensible to use propositions, then
	finding unexpected white swans does provide commonsense information.

    Perhaps the goal of Science should not be Truth (e.g., universal
    propositions) but discovering ranges of application and conditions.
    That is, not to confirm or refute - but to reformulate satisfactorily.

I agree with Marvin.  Scientific communities expend an enormous amount
of effort on reformulation.  Is there any good literature on this
subject?

∂12-Jan-83  2148	GAVAN @ MIT-MC 	rat psychology
Date: Thursday, 13 January 1983  00:42-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: rat psychology
In-reply-to: The message of 12 Jan 1983  20:55-EST from BATALI

    Date: Wednesday, 12 January 1983  20:55-EST
    From: BATALI
    Sender: BATALI
    To:   phil-sci
    Re:   rat psychology

    I think that it is important to remember what this discussion is
    supposedly about: the relevance of the philosophy of science to ai.
    Now, as is my habit, let me point out that there are two ways to
    construe this:

    	1.  As a methodological discussion: what can the philosophy of
    science tell us about how to do ai?

    	2.  As a discussion of potentially powerful methods useful in
    ai.  Could (should) we be writing learning programs that embody Occam
    or Popper or Lakatos?

    I personally think that the answer to the first question is "very
    little."  The philosophy of science seems to trail behind the
    successes of science and then account for them.

This last statement is generally correct, in my estimation, especially
if we limit ourselves to discussing Anglo-Saxon philosophers of
science.  Yet I wonder . . . if a major component (perhaps THE major
component) of the philosophy of science is epistemology, the theory of
knowledge, perhaps there is indeed a special relevance for the
philosophy of science in this "science" (if science it be).

    The second question is more interesting.  Another plus for pursuing
    it: we need not limit ourselves to theories of scientific progress
    that are currently accepted.  This is because of the differences between
    the "goals" of science and those of a behaving agent.  A bogus theory
    of the philosophy of science might be the basis of a very useful
    theory of perception, for example.

    In fact, it seems to me that many criticisms of various philosophies
    of science turn on the problems associated with truth and science as
    the seeker thereof.  

I'm tempted to ask just what TRUTH is and what makes you think there's
any such thing, but that's a little off the subject.  Maybe not.  To
me, there is no truth but only consensus.  What we call "truth" is
only what we have agreed upon, given certain conventions which we agree
are "rational."  It seems to me that the notion of coming to a consensus
brings us back to the problem which motivated the discussion.  How do we
in society and mental agents in a society-of-mind consensually validate
our beliefs and theories?

    While I believe that that is what science is, I
    think that perception and learning are not as constrained.

Perception and learning are certainly integrally related to the
philosophy of science.  In fact, one could even say that the
philosophy of science is itself the philosophy of perception and
learning.  Since they're really inseparable I can't see how one can be
less constrained than the other.  I agree, though, that one could
simply bracket everything that falls under the artificial heading
"philosophy of science" and study only what one considers in the
domains of "perception and learning."  But is this really a rational
strategy?  Where do you demarcate the border between the two?

∂12-Jan-83  2153	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Scientific-Engineering Community Metaphor compatible with Society of the Mind?
Date: Thursday, 13 January 1983, 00:50-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
To: GAVAN at MIT-MC
Cc: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>, AGRE at MIT-OZ at MIT-MC,
    batali at MIT-OZ at MIT-MC, philosophy-of-science at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 12 Jan 83 03:02-EST from GAVAN at MIT-MC

    Date: Wednesday, 12 January 1983  03:02-EST
    From: GAVAN at MIT-MC
    Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

    ...
    But it seems to me that the situation is pretty anarchistic: when you have all
    these competing paradigms trying to explain the same phenomena without
    intercommunicating, you've got anarchy.

The various research programmes of Cognitive Science (behaviorism, complex
information processing, etc.) do communicate with each other and in LARGER
ARENAS as well.  You seem to think the situation is "anarchistic"
because there is not more communication going on.  What exactly is this
extra communication that is missing?

            How often do cognitivists and behaviorists have joint conferences?
            How many joint journals do they have?  Who sponsors both enterprises?

        Why should they have joint conferences or journals?  What good do you
        think it would do?  Do you think that it is a workable proposal?

    It's certainly not a workable proposal, which is my point.  If they
    have the same problem domain and they don't intercommunicate, then the
    overall state of science in that problem domain is certainly chaotic.

Exactly what is the lack of communication that makes it "chaotic"?  Who
in Cognitive Science should be talking to whom?

        It's not clear to me that agents in the Society of the Mind communicate
        using messages in any way which is analogous to communication in
        scientific-engineering communities. Do you see any direct similarities?

    It's possible that, when you refer to the communications of agents in
    the Society of the Mind, you have in mind somebody's explication of
    the metaphor with which I'm not familiar.

In the local lingo the term "Society of the Mind" refers to a theory by
Minsky and Papert.  Marvin has written some nice papers about it in
recent years which are available as AI Memos.

                I have in mind the principles by which scientific communities 
                ACTUALLY work.  Determining the principles by which
                scientific communities work is itself a scientific question which
                is addressed by a scientific community.

            The problem is that there's no agreement on how scientific communities
            actually work.  Popper, Kuhn, Lakatos, and Feyerabend all draw on
            empirical, historical evidence to support their incommensurable
            theories.

        Why do you think they are incommensurable?  They seem to rationally
        discuss issues and argue with each other a lot.

    Kuhn's *Structure of Scientific Revolutions* and Feyerabend's *Against
    Method* are DIAMETRICALLY opposed to Popper's *Logic of Scientific
    Discovery*. The public arguments are a cover for private wars.  I've
    also heard stories (from reliable sources) about nasty mud-slinging
    between Popper and Lakatos at the London School of Economics (before
    the latter's death).

Backbiting, personal animosity, attempts at cheating, etc. have always
been a part of the scientific process.  Science/engineering communities have
developed effective methods for dealing with these phenomena so that the
communities function effectively in spite of the problems they cause.

            Anyway, if you want to use "the principles by which scientific
            communities ACTUALLY work" you'll have to choose somebody's set of
            principles.

        Obviously we will have to identify some principles like Commutativity and
        Sponsorship.  It's not clear that we have to restrict ourselves to one
        source of ideas for principles.

    Hopefully, you'll select the right ones.

The ones we select will be subject to and grow out of a process of scientific
debate, scrutiny, and reformulation--like the one we are engaged in
RIGHT NOW on this mailing list.  Perhaps we differ in that I have faith
in this process whereas you do not.

∂12-Jan-83  2224	KDF @ MIT-MC 	reformulation   
Date: Thursday, 13 January 1983  01:11-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   Carl Hewitt <Hewitt @ MIT-OZ>
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, MINSKY @ MIT-OZ,
      phil-sci @ MIT-OZ
Subject: reformulation
In-reply-to: The message of 13 Jan 1983 00:17-EST from Carl Hewitt <Hewitt>

	Presumably one must test reformulations, which involves
looking for falsifications.  Looks like a dumbbell theory to me.

∂12-Jan-83  2234	KDF @ MIT-MC 	Confounding
Date: Thursday, 13 January 1983  01:23-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   BATALI @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Confounding
In-reply-to: The message of 13 Jan 1983  00:42-EST from GAVAN


    brings us back to the problem which motivated the discussion.  How do we
    in society and mental agents in a society-of-mind consensually validate
    our beliefs and theories?

It is far from clear that any kind of consensus about beliefs is
needed in the society-of-mind, and if it is, that it would be anything
like the mechanisms for human societies.  Except for a few related
agents, the beliefs/goals/theories/whatever of an agent are not ABOUT
the same things as the others - if they were, we are left with little
homonuculli!  The bug is much like the person who, upon hearing the
Kinetic theory of gasses, said it makes sense because when he moves
faster, he gets hot too....

∂12-Jan-83  2249	Carl Hewitt <Hewitt at MIT-OZ> 	semantics for reasoning
Date: Thursday, 13 January 1983, 01:36-EST
From: Carl Hewitt <Hewitt at MIT-OZ>
Subject: semantics for reasoning
To: MINSKY at MIT-OZ
Cc: DAM at MIT-OZ, PHIL-SCI at MIT-OZ, Hewitt at MIT-OZ
In-reply-to: The message of 12 Jan 83 22:03-EST from MINSKY at MIT-OZ at MIT-MC

    Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Wed 12 Jan 83 22:06:36-EST
    Date: 12 Jan 1983 2203-EST
    From: MINSKY at MIT-OZ at MIT-MC
    Subject: OCcam's razor
    To: DAM at MIT-OZ at MIT-MC, PHIL-SCI at MIT-OZ at MIT-MC,
        MINSKY at MIT-OZ at MIT-MC

            From: DAM
            Why do we need theories of internal cognitive state rather
            than just talk about the genetic endowment and the stimulus
            history?  It is a matter of scientific judgement.  However it seems
            clear that some higher level notions, probably non-computational in
            nature, are needed to understand cognition.  The notions of
            "statement", "entailment", and "empirical truth" are simply my
            favorite candidates.

    You keep missing the point.  I am not making a general philosophical
    statement that NO "higher level" notions are needed.  I am simply
    assaulting your personal choices about exactly those candidates.  I
    keep saying that I think the reason cognitive and philosophical
    theories have proceeded so slowly is BECAUSE those unfortunate ideas
    got so embedded in our language and thought.

    I won't discuss this any more in this forum, except to remark that in
    my view this rejection of the great value of "entailment" and the like
    is the reason why I am making more progress in cognitive
    theory than everyone else.  Of course, anyone can disagree with that
    assessment.
    -------

I agree with Marvin in being skeptical of proposals to ground reasoning
on the notion of logical entailment as formalized in the truth-theoretic
semantics of Tarski.  However, I am not sure whether my
proposals for how to proceed are compatible with Marvin's.  For some
years now a group of us have been proceeding with a research program to
develop "Message Passing Semantics".

Message Passing Semantics takes a different perspective on the meaning
of a sentence from that of truth-theoretic semantics.
In truth-theoretic semantics, the meaning of a sentence
is determined by the models which make it true.
For example, the conjunction of two sentences is true exactly when both of its
conjuncts are true.  In contrast, Message Passing Semantics takes the
meaning of a message to be the effect it has on the subsequent behavior
of the system.  In other words, the meaning of a message is determined by
how it affects the recipients.  Each partial meaning of a message
is constructed by a recipient in terms of how it is processed. 
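
A small sketch of the contrast just described, in Python, with invented
names throughout (this is not Hewitt's actual system): a
truth-theoretic evaluator assigns a sentence a truth value against a
model, while a message-passing recipient gives a message its meaning
through what it subsequently does with it.

# Truth-theoretic side: meaning via satisfaction in a model.
def true_in(model, sentence):
    """A conjunction is true exactly when both of its conjuncts are true."""
    if isinstance(sentence, tuple) and sentence[0] == "and":
        return true_in(model, sentence[1]) and true_in(model, sentence[2])
    return sentence in model                      # atomic sentence

MODEL = {"Bird(Fred)", "Flys(Fred)"}
print(true_in(MODEL, ("and", "Bird(Fred)", "Flys(Fred)")))   # True

# Message-passing side: meaning as the effect on the recipient's behavior.
class Recipient:
    def __init__(self):
        self.beliefs = set()
        self.pending = []

    def receive(self, message):
        kind, content = message
        if kind == "assert":              # this recipient absorbs assertions
            self.beliefs.add(content)     #   into its beliefs ...
        elif kind == "request":           # ... and schedules requests as work;
            self.pending.append(content)  # a different recipient might do neither

r = Recipient()
r.receive(("assert", "Bird(Fred)"))
r.receive(("request", "check whether Fred flies"))
print(r.beliefs, r.pending)               # the message's "meaning" is this change

The same message can thus mean different things to different
recipients, which is the point of the remark above about partial
meanings being constructed by each recipient.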

At a deep level, understanding always involves 
categorization, which is a function of interactional (rather than
inherent) properties and the perspective of individual viewpoints.
Message Passing Semantics differs radically from truth-theoretic semantics
which assumes that it is possible to give an account of truth in
itself, free of interactional issues, and that the theory of
meaning will be based on such a theory of truth.

Developing a useful mathematical semantics for reasoning that
is not based on logical entailment is difficult work that is still in
its infancy.  Suggestions as to how we should proceed are most welcome.

I would also welcome citations for good work which is philosophically
compatible with our research programme.  My hope is to find good work of
which I am not cognizant. 

∂12-Jan-83  2324	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 
Date: Thursday, 13 January 1983  02:15-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   Carl Hewitt <Hewitt @ MIT-OZ>
Cc:   AGRE @ MIT-OZ, batali @ MIT-OZ, philosophy-of-science @ MIT-OZ
Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
In-reply-to: The message of 13 Jan 1983 00:50-EST from Carl Hewitt <Hewitt>

    Date: Thursday, 13 January 1983, 00:50-EST
    From: Carl Hewitt <Hewitt>
    To:   GAVAN
    cc:   Carl Hewitt <Hewitt>, AGRE, batali, philosophy-of-science, Hewitt
    Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

        Date: Wednesday, 12 January 1983  03:02-EST
        From: GAVAN at MIT-MC
        Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

        ...
        But it seems to me that the situation is pretty anarchistic: when you have all
        these competing paradigms trying to explain the same phenomena without
        intercommunicating, you've got anarchy.

    The various research programmes of Cognitive Science (behaviorism, complex
    information processing, etc.) do communicate with each other and in LARGER
    ARENAS as well.  You seem to think the situation is "anarchistic"
    because there is not more communication going on.  What exactly is this
    extra communication that is missing?

You must remember that I'm not arguing in favor of Feyerabend's
position.  Why do you continually try to get me to defend it?  I'm
just trying to give you an example of what he might be talking about.
If you really want to know what his position is, you should read the
text.  Anyway, you might be able to clear something up for me.  Where
is the communicative effort required to effect a synthesis between
behaviorism, cognitive science, and whatever other paradigms there
might be in psychology?

                How often do cognitivists and behaviorists have joint conferences?
                How many joint journals do they have?  Who sponsors both enterprises?

            Why should they have joint conferences or journals?  What good do you
            think it would do?  Do you think that it is a workable proposal?

        It's certainly not a workable proposal, which is my point.  If they
        have the same problem domain and they don't intercommunicate, then the
        overall state of science in that problem domain is certainly chaotic.

    Exactly what is the lack of communication that makes it "chaotic"?  Who
    in Cognitive Science should be talking to whom?

You misunderstand me.  The lack of communication is in psychology in
general, not in cognitive science in particular.  There's a great
amount of normal science going on within both behaviorism and
cognitive science, yet the cross-fertilization between the two is
minimal.  Some of the more dogmatic members of both camps probably see
nothing wrong with this, but, as I implied in a recent response to
Batali on this list, the two approaches are by no means mutually
exclusive.  But where is the cross-fertilization?  The problem domain
of both paradigms is, it seems to me, explaining human nature (or some
such), yet there's little or no effort to discuss cognitive hypotheses
and results within the behavioral school and behavioral hypotheses and
results within the cognitive school.  Don't forget Freudians and the
Gestaltists.  Is this not anarchy?  I could draw other examples from
other disciplines (e.g., liberals and Marxists in political science and
economics), but why should I?  This whole thing started when I
informed you of Feyerabend's position.  I have no great desire to
defend his thesis.  I have only sought to elaborate on it.  Please
desist from ascribing beliefs to me when I am not stating my own
position.  Personally, I lean closer to Lakatos than to Feyerabend,
which should have been evident in my other messages.

            It's not clear to me that agents in the Society of the Mind communicate
            using messages in any way which is analogous to communication in
            scientific-engineering communities. Do you see any direct similarities?

        It's possible that, when you refer to the communications of agents in
        the Society of the Mind, you have in mind somebody's explication of
        the metaphor with which I'm not familiar.

    In the local lingo the term "Society of the Mind" refers to a theory by
    Minsky and Papert.  Marvin has written some nice papers about it in
    recent years which are available as AI Memos.

Yes, I know (please don't condescend).  But metaphors mean different
things to different people.  That's part of what makes them so
powerful.  What I meant was that you might have been writing at the
level of the metaphor or at the level of the explication.  It was
ambiguous to me.  I thought JCMA cleared that up with you when I was
speaking with him while you received the message (maybe I called the
wrong method).

                    I have in mind the principles by which scientific communities 
                    ACTUALLY work.  Determining the principles by which
                    scientific communities work is itself a scientific question which
                    is addressed by a scientific community.

                The problem is that there's no agreement on how scientific communities
                actually work.  Popper, Kuhn, Lakatos, and Feyerabend all draw on
                empirical, historical evidence to support their incommensurable
                theories.

            Why do you think they are incommensurable?  They seem to rationally
            discuss issues and argue with each other a lot.

        Kuhn's *Structure of Scientific Revolutions* and Feyerabend's *Against
        Method* are DIAMETRICALLY opposed to Popper's *Logic of Scientific
        Discovery*. The public arguments are a cover for private wars.  I've
        also heard stories (from reliable sources) about nasty mud-slinging
        between Popper and Lakatos at the London School of Economics (before
        the latter's death).

    Backbiting, personal animosity, attempts at cheating, etc. have always
    been a part of the scientific process.  Science/engineering communities have
    developed effective methods for dealing with these phenomena so that the
    communities function effectively in spite of the problems they cause.

I agree, but in what sense are "backbiting, personal animosity,
attempts at cheating, etc.", rational?  Will your agents be capable of
these sorts of performances?

                Anyway, if you want to use "the principles by which scientific
                communities ACTUALLY work" you'll have to choose somebody's set of
                principles.

            Obviously we will have to identify some principles like Commutativity and
            Sponsorship.  It's not clear that we have to restrict ourselves to one
            source of ideas for principles.

        Hopefully, you'll select the right ones.

    The ones we select will be subject to and grow out of a process of scientific
    debate, scrutiny, and reformulation--like the one we are engaged in
    RIGHT NOW on this mailing list.  Perhaps we differ in that I have faith
    in this process whereas you do not.

No.  I don't think I lack faith in this process.  FEYERABEND DOES, BUT
I'M NOT HE!  If I did lack faith in this process, why would I bother
discussing it with you?  This is the substance of Hilary Putnam's
critique of Feyerabend in *Reason, Truth, and History* -- if
Feyerabend truly believed his anarchist thesis, then he wouldn't bother
defending it.  Anarchism is thus self-refuting.  This is why I've said
that Feyerabend may actually be engaged in a massive tongue-in-cheek,
neo-Popperian critique of Lakatos.

I think the process of "coming-to-consensus" is precisely what we need
to talk about.  Can we come to some sort of consensus about how we
come to consensus?  Or should we first come to a consensus on whether
there really is something fundamentally better about the way that
scientists do it?  

∂12-Jan-83  2333	GAVAN @ MIT-MC 	Confounding   
Date: Thursday, 13 January 1983  02:26-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   BATALI @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Confounding
In-reply-to: The message of 13 Jan 1983  01:23-EST from KDF

    Date: Thursday, 13 January 1983  01:23-EST
    From: KDF
    Sender: KDF
    To:   GAVAN
    cc:   BATALI, phil-sci
    Re:   Confounding

        brings us back to the problem which motivated the discussion.  How do we
        in society and mental agents in a society-of-mind consensually validate
        our beliefs and theories?

    It is far from clear that any kind of consensus about beliefs is
    needed in the society-of-mind, and if it is, that it would be anything
    like the mechanisms for human societies.  Except for a few related
    agents, the beliefs/goals/theories/whatever of an agent are not ABOUT
    the same things as the others - if they were, we would be left with
    little homunculi!  The bug is much like the person who, upon hearing
    the kinetic theory of gases, said it makes sense because when he moves
    faster, he gets hot too....

Yes, I know there's a massive jump in levels here, but so what?  This is
all on the metaphorical level (so far) anyway.  What's the point in choking
off the discussion?  Also, by a consensus I could mean a consensus between
any two (or more) agents.  I'm not positing the necessity of universal
consensus (I think you think I do).  In the "real world" small groups of
people (even as small as two) must sometimes try to come to a consensus.

∂12-Jan-83  2356	GAVAN @ MIT-MC 	reformulation 
Date: Thursday, 13 January 1983  02:40-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, Carl Hewitt <Hewitt @ MIT-OZ>,
      MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: reformulation
In-reply-to: The message of 13 Jan 1983  01:11-EST from KDF

    Date: Thursday, 13 January 1983  01:11-EST
    From: KDF
    Sender: KDF
    To:   Carl Hewitt <Hewitt>
    cc:   William A. Kornfeld <BAK>, MINSKY, phil-sci
    Re:   reformulation

    	Presumably one must test reformulations, which involves
    looking for falsifications.  Looks like a dumbbell theory to me.

What do you think of Lakatos' argument that all theories are equally
undisprovable?

∂13-Jan-83  0042	JCMa@MIT-OZ at MIT-MC 	Popper, again.   
Date: Thursday, 13 January 1983, 03:39-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Popper, again.
To: BAK@MIT-OZ at MIT-MC
Cc: phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 11 Jan 83 23:01-EST from William A. Kornfeld <BAK at MIT-OZ>

    Date: Tuesday, 11 January 1983, 23:01-EST
    From: William A. Kornfeld <BAK at MIT-OZ>
    Subject: Popper, again.

    A one-sentence definition of falsificationism is: "A theory is accepted
    because we have failed to disprove it."  

This specification lacks a critical condition: that it is known some
"critical experiment" can be performed which will falsify the theory.
Survival of the ordeal then vindicates the theory.  The question of
proof (dis-proof) is then simply pushed off to establishing what is in
the set of "critical experiments."  Determination of the contents of this
set requires hypothesis formation and validation.  Looks like
falsificationism is logically incoherent!

∂13-Jan-83  0057	JCMa@MIT-OZ at MIT-MC 	Max Planck's view
Date: Thursday, 13 January 1983, 03:55-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Max Planck's view
To: Phil-sci@MIT-OZ at MIT-MC

"A new scientific truth does not triumph by convincing its opponents and
making them see the light, but rather because its opponents eventually
die, and a new generation grows up that is familiar with it."

[From: Max Planck, Scientific Autobiography and Other Papers (New York:
Philosophical Library, 1949), pp. 33-34.]

How do Society of Mind Theories or Scientific Community models model
this?  Or, do they handle it?  How do they know which "ideational" group
is which?

Hint: Recession and Emergence of scientific habits among
mental-agent/experts.

∂13-Jan-83  0119	JCMa@MIT-OZ at MIT-MC 	Solomonoff-Kolmogoroff theory   
Date: Thursday, 13 January 1983, 04:11-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Solomonoff-Kolmogoroff theory
To: Minsky@MIT-OZ at MIT-MC
Cc: Phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 12 Jan 83 22:12-EST from MINSKY at MIT-MC

    Date: Wednesday, 12 January 1983  22:12-EST
    From: MINSKY @ MIT-MC
    Subject: Occum's razor
    In-reply-to: The message of 12 Jan 1983  16:14-EST from GAVAN


    Oops.  The difference between Solomonoff and Lakatos is that
    Solomonoff's is a sort of precise, specific theory about an "optimal"
    theory of induction, whereas Lakatos is a sort of critic of common
    sense induction ideas.  

What does Solomonoff's "optimal theory of induction" have to say about
hypothesis formation, abduction?  I suspect it must be rather incomplete
if it cannot handle hypothesis formation effectively.  Of course, this
is the sort of stuff one would expect to explain simple-minded
self-organizing processes.

∂13-Jan-83  0148	JCMa@MIT-OZ at MIT-MC 	Confounding 
Date: Thursday, 13 January 1983, 04:43-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Confounding
To: KDF@MIT-MC
Cc: phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 01:23-EST from KDF at MIT-MC

    Date: Thursday, 13 January 1983  01:23-EST
    From: KDF @ MIT-MC
    Subject: Confounding
    In-reply-to: The message of 13 Jan 1983  00:42-EST from GAVAN


	brings us back to the problem which motivated the discussion.  How do we
	in society and mental agents in a society-of-mind consensually validate
	our beliefs and theories?

    It is far from clear that any kind of consensus about beliefs is
    needed in the society-of-mind, and if it is, that it would be anything
    like the mechanisms for human societies.  

While one wouldn't expect the instantial consensus process to be as bad
as that of societies, it seems cavalier to discount the structural
similarities required for conflict-resolution and consensus-development.
What does it mean to "make up your mind?"

    Except for a few related agents, the beliefs/goals/theories/whatever
    of an agent are not ABOUT the same things as the others - if they
    were, we would be left with little homunculi!

They nevertheless do have to be able to communicate in some way.
Suppose mental agents are created under different aggregate mental
regimes (e.g., the child-mind versus the adult-mind).  If the mental
agent created under the child-mind regime is based on concepts whose
meaning (intension) has shifted radically, what sort of meaning (intension)
is associated with the mental agent in the adult-mind?  How is this
resolved?  If all the units are agents, it would seem that "negotiation"
and "due process" are obvious metaphors.  The problem is that the mental
agents may not even talk the same protocols:  Sounds like paradigm
conflict.  What happens then is conflict!  Maybe cognitive dissonance
and other sorts of "internal" conflicts in people's minds are about just
this sort of thing.  Sure sounds like social metaphors (or better,
system types) help here.  

I guess the real objection is that metaphors will get you epistemic
access, but once you are there, you can develop a concrete model.  It is
this that tends to render the original metaphors obsolete -- even though
they may have been instrumental in getting there.

∂13-Jan-83  0308	ISAACSON at USC-ISI 	Peirce for message passing semantics   
Date: 13 Jan 1983 0255-PST
Sender: ISAACSON at USC-ISI
Subject: Peirce for message passing semantics
From: ISAACSON at USC-ISI
To: HEWITT at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]13-Jan-83 02:55:16.ISAACSON>

        In-Reply-To: Your message of Thursday, 13 Jan 1983,
01:36-EST

You're probably familiar with it, but I'll throw it in anyway.
You may wish to dwell on a lot of Charles Sanders Peirce.

-- JDI


∂13-Jan-83  0717	MINSKY @ MIT-MC 	reformulation
Date: Thursday, 13 January 1983  09:43-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, Carl Hewitt <Hewitt @ MIT-OZ>,
      phil-sci @ MIT-OZ
Subject: reformulation
In-reply-to: The message of 13 Jan 1983  01:11-EST from KDF


You don't "test" reformulations to see if they are true or false.
As always, you probe to find the range of applicability.  You
could regard finding an inapplicable place as a refutation, I suppose.

∂13-Jan-83  0825	BATALI @ MIT-MC 	Science vs Perception  
Date: Thursday, 13 January 1983  11:22-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Science vs Perception
In-reply-to: The message of 13 Jan 1983  00:42-EST from GAVAN

    Date: Thursday, 13 January 1983  00:42-EST
    From: GAVAN

    Perception and learning are certainly integrally related to the
    philosophy of science.  In fact, one could even say that the
    philosophy of science is itself the philosophy of perception and
    learning.

This is precisely the position that I deny.  At least I hope that we
can agree that it is worthy of discussion.  Why, on the face of it,
should a process that operates over generations and many thousands of
agents have anything to do with something that takes milliseconds in a
single animal?

Possible answer: Because both processes seek "truth".  I'll ignore
problems with defining or understanding truth.  But notice that there
are two aspects of seeking truth.  One is the invention of new
concepts and vocabularies, new "ways of thinking" about the world.
This is the sort of thing that Science (at least the public's view of
it) spends its time on.  But another aspect of the search for truth is
simply finding out what's going on now, in whatever vocabulary and
with whatever concepts available.

For example: the development of the Copenhagen (probabilistic)
interpretation of the Schrodinger equation is an example of the first
kind of truth-seeking.  The experimental measurement of the charge of
the electron is an example of the second kind of truth-seeking.

I claim that perception is the second kind.  A rat must describe its
environment quickly and accurately enough to tell what it should do.
It need not create any new concepts or theories in the process.  In
fact, it seems to me that some "lower" intelligences could be limited
just in their ability to create new concepts but would still be very
good at perception.

Even lots of learning might be just creating new structures in a given
and unchanging concept-set.  The actual creation of new concepts
probably happens very rarely.  So my point:  The standard conception of
science as a balls-to-the-wall dash after Truth is not the same
problem faced by a perceiving agent or certain kinds of learning
agents.  So issues raised by one enterprise might or might not cross
the boundaries.

∂13-Jan-83  0831	BAK @ MIT-MC 	Popper, again.  
Date: Thursday, 13 January 1983  11:27-EST
Sender: BAK @ MIT-OZ
From: BAK @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Popper, again.
In-reply-to: The message of 13 Jan 1983 03:39-EST from JCMa

        A one-sentence definition of falsificationism is: "A theory is accepted
        because we have failed to disprove it."  

    This specification lacks a critical condition: that it is known some
    "critical experiment" can be performed which will falsify the theory.
    Survival of the ordeal then vindicates the theory.  The question of
    proof (dis-proof) is then simply pushed off to establishing what is in
    the set of "critical experiments."  Determination of the contents of this
    set requires hypothesis formation and validation.  Looks like
    falsificationism is logically incoherent!

I agree 100%, and so do Popper and Lakatos.  One-sentence descriptions
aren't always enough, I suppose.  The one-sentence theory is called
"naive falsificationism", which nobody took seriously for more than a
little while.


[Other topic:]

Could someone post some references to the deep results in Kolmogoroff
complexity theory?  My original feeling about it (similar to DAM's)
was that it wasn't terribly relevant to thinking because the
definition of complexity, the length of a compiled program on a given
interpreter, is strongly related to the details of the instruction
set of the machine, and probably unrelated to some "high-level" notion
of complexity.  The word "compiled" in that description is important.
I believe that in the ultimate programming language easy intuitive thoughts
will correspond to short programs.  However "ULTIMA" will have to be compiled
through a couple intermediate languages before it can be run.  I don't
see any reason to suspect that there will be much correlation between
lengths of programs in ULTIMA and lengths of Turing machine or Vax programs
that ULTIMA compiles into.  Is this a baseless criticism of that theory's
relevance to thinking?

∂13-Jan-83  0928	MINSKY @ MIT-MC 	Popper, again.    
Date: Thursday, 13 January 1983  12:21-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   BAK @ MIT-OZ, MINSKY @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Popper, again.
In-reply-to: The message of 13 Jan 1983  11:27-EST from BAK


	My original feeling about it [Solomonoff, Kolmogoroff]
	was that it wasn't terribly relevant to thinking because
	the DEFINITION of complexity, the length of a compiled
	program on a given interpreter, is strongly related to
	the details of the instruction set of the machine, and
	probably unrelated to some "high-level" notion of
	complexity.  The word "compiled" in that description
	is important.  I believe that in the ultimate
	programming language easy intuitive thoughts will
	correspond to short programs.  However "ULTIMA" will
	have to be compiled through a couple intermediate
	languages before it can be run.  I don't see any
	reason to suspect that there will be much correlation
	between lengths of programs in ULTIMA and lengths of
	Turing machine or Vax programs that ULTIMA compiles
	into.  Is this a baseless criticism of that theory's
	relevance to thinking?   [MAIL from BAK].

What Solomonoff discovered is that the coupling to the Turing machine
is weak - in an asymptotic sense.  Suppose there exists some
compiler (i.e., some intermediate or high-level language that helps
describe the world).  Then this compiler can be described in some
finite string (to be applied to all the data that follows).

Then, if the use of that high-level language - i.e., of some
sophisticated intermediate concepts - reduces the complexity of
describing the world, and if it does this well enough to pay for the
(finite) length of its definitions, then it will yield a shorter
description than would a lower-level language - provided there is
enough world to describe.  This is because the constant increment
for the compiler is repaid by a fractional reduction in the length of
everything that follows it.

In particular, if you start with Turing machine A, but Turing
machine B is chronically better, you only have to add a string (ENCODE
MACHINE-A MACHINE-B) as a prologue, and this adds a fixed increment
to the world-description.

In other words the idea of "shortest description" includes all
possible abbreviations - e.g., all possible higher-level theories in
higher-level languages.  But they only come into play as the
inductive-inference method is applied to larger and larger theories.
Solomonoff and Kolmogoroff point out that if the data is random,
then higher-level languages never pay - and that becomes their
definition of randomness.
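
In later notation, the machine-independence can be put in one line (a
sketch; K_A(x) here stands for the length of the shortest program that
makes machine A print x, and c_{A,B} for the length of the compiler
prologue - both symbols are mine, not Solomonoff's):

    % Invariance (sketch): switching reference machines costs at most a
    % constant that does not depend on the data x.
    K_A(x) \;\le\; K_B(x) + c_{A,B}

For long enough data the constant is swamped, which is the sense in
which the coupling to the particular machine is weak.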

The best heuristic reference is Solomonoff's "A Theory of Inductive
Inference," Parts I and II.  I'll try to reproduce reprints, and ask
him where the more recent papers are.

Understand that the theory is not practical, since it requires one
to consider all codes of less than length N.  Levin tries to
discover conceptually practical versions of this.  But,
philosophically, I consider the idea very clear and sensible -
exactly because it does seem to answer all the objections to "naive"
simplicity-criteria theories of inference.  In
particular, it does deal with (i) the idea of all possible
hypotheses and (ii) the complaint that simplicity is relative to
what one assumes available at the start.

∂13-Jan-83  0940	DAM @ MIT-MC 	Statement, Truth, and Entailment,   
Date: Thursday, 13 January 1983  12:36-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment,


	I would like to continue the discussion of the notions of
"statement", "truth", and "entailment".  I freely admit that the
importance of these notions is a matter of scientific judgement and
that Marvin simply uses other higher level notions.  However Marvin
ignored my "evidence" for the importance of these notions.  I will
assume that language is closely related to our cognitive structure.
Are there any human societies in which people do not use declarative
utterances?  The existence of such declarative utterances seems to me to
be a good argument for the importance of the notion of "statement".
Are there any human societies which do not attribute truth and
falsehood to declarative statements (is the notion of a lie a
universal human notion)?
	Another argument for these notions is the intuitive truth of
mathematics.  Pure mathematics has nothing to do with the real world
and yet there seems to be objective mathematical truth.  Is there any
explanation for this other than to assume an innate notion of
mathematical or "definitional" truth?  This argument is more
convincing if one takes mathematics to be prior to any formulation of
it.  It is the intuitive notion of a precise argument which gave rise
to set theory, not the other way around.  Even today set theory (and
first-order inference) must be taken as only an approximation of true
mathematical precision, which is an undefined human phenomenon.
	Finally I would like to address Carl's "message passing
semantics".  Consider "taking the meaning of the message to be the
effect it has on the subsequent behaviour of the system".  This is a
very good example of what I call computational reductionism.  Notice
the similarity to stimulus-response definitions of meaning.  Carl goes
so far as to argue AGAINST defining truth and meaning in a way which
is independent of the computation performed by the system.  It seems
to me that Fourier transforms (and FFT procedures) are best understood
in terms of REAL numbers.  Try defining the notion of a real number
in a purely computational way.  Does the absence of a computational
definition for real numbers mean that the notion of a real number
is useless in understanding programs?

	David Mc

∂13-Jan-83  0956	MINSKY @ MIT-MC 	Statement, Truth, and Entailment,
Date: Thursday, 13 January 1983  12:52-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ, MINSKY @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment,
In-reply-to: The message of 13 Jan 1983  12:36-EST from DAM


I don't ignore the "evidence" that people use "propositions" and ideas
like "true" and "false".  I'm only saying that I think it is a
disease, like they used to talk about "angels" and "devils".  It is
one thing to ask - as I do - why people find these useful in everyday
life.  That is an important psychological question, and Part II of
Learning Meaning is largely concerned with this.

Similarly, people universally also believe in "self" and "will".  Again,
it is important to find out why.

On the other hand, the wide distribution of such ideas does not argue
for their philosophical importance, as DAM seems to argue.

On DAM's second point, I have pointed out repeatedly that I think
there's an artifact in trying to move from mathematical truth to
general truth.  To me,  

	MATHEMATICS IS PRECISELY THAT TO WHICH LOGIC APPLIES

I'm serious.  It is dangerous, therefore, to think that mathematics is
a mere, random, illustration of the general usefulness of the idea of
"truth" as applied to other aspects of thinking and learning and
physics, etc.  My feeling is based on the way philosophy has always
had to return to mathematical and logical examples, when it couldn't
deal with anything more real.

Similarly, in DAM's complaint to Hewitt about the "notion of real
numbers", I refer again to my essays in Learning Meaning and, expanded
a little, in "Why People Think Computers Can't".  I would ask DAM,
before continuing the discussion, to explain what's wrong with my
notion that the concept of "number" - and "real number" even more -
cannot be captured in simple formal, static definitions.

I find, in particular, DAM's idea of "computational reductionism"
to be misguided, since in my own view, it is so much broader than
his "propositional reductionism".

∂13-Jan-83  1012	DAM @ MIT-MC 	Doing as as Test for Cognitive Theories. 
Date: Thursday, 13 January 1983  13:09-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Doing as as Test for Cognitive Theories.


	It seems to me that a basic problem with theories of cognition
is that they are often fuzzy and imprecise and therefore they are not
guided by clear observations in the light of solid inferences.  One
can attempt to cure this problem in one of the two following ways:

1) One can try to write a computer program which achieves some
behaviourally specified goal (plays chess, designs circuits).

2) One can directly make a precise theory.  This involves giving the
terms of the theory precise and unambiguous definitions.  The terms
should be related in precise and unambiguous ways.  Every modern
mathematician has developed an instinct for precision and knows when
some terms of a theory are imprecise.

	The first approach seems to be what Batali is talking about
when he speaks of "doing" as opposed to just knowing.  I think that
this approach is important, if for no other reason than that we do in fact
need automated design systems.  However if one takes this to be the
principal criterion for precision I think one is led down the path of
computational reductionism.
	It is important to note that the second approach has nothing
to do with axioms in first order predicate calculus.  Mathematical
precision is an undefined human phenomenon.  To understand it one must
learn to forget much of one's real-world common sense knowledge and
learn to "define" things.
	In general it is my opinion that programs are not as important
as precise theories, although precise theories can lead to the
development of programs (consider the FFT).

	David Mc

∂13-Jan-83  1114	BATALI @ MIT-MC 	Solomonov Papers  
Date: Thursday, 13 January 1983  14:07-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonov Papers
In-reply-to: The message of 13 Jan 1983  12:21-EST from MINSKY


I have most of the relevant papers:

"A Formal Theory of Inductive Inference." Parts 1 and 2. Contains the
basic ideas.

"Computational Complexity and Probability Constructions" by D. Willis.
This is the most careful treatment of the issues.  Very interesting
mathematics. 

"The Complexity of Finite objects....." by Zvonkin and Levin.  A hairy
treatment of Kolmogorov complexity.

"Inductive Inference Theory" by Solomonov.  An IJCAI paper.  A good
short treatment heavy on handwaving.

Come by if you want to make copies.

∂13-Jan-83  1119	DAM @ MIT-MC 	Statement, Truth, and Entailment    
Date: Thursday, 13 January 1983  14:15-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
CC:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment


	I am glad that Marvin has decided to continue and am also
interested in other opinions (though this has turned into a pretty hot
little argumentative fire).

	Marvin has asked for a response to his position that the
notion of "real number" can not be given a concise "definition".  I
have many things to say about other aspects of his last message but
since he asked for this first I will gladly make this the subject of
my first response.

	The definition of a real number does in fact have a concise
definition (at least any mathematician would tell you this).  The
details are not even hard.  The real numbers are a totally ordered set
of points such that for any two of them there is one in between and for any
subset which is bounded below there is a greatest lower bound of that
subset, and similarly for upper bounds.  It can be proven that any two
sets of points which meet these properties are isomorphic and thus this
definition exactly specifies all the properties of the real numbers.
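
In symbols, the two order properties being invoked are roughly these (a
sketch in modern notation; R and S are my labels for the ordered set and
an arbitrary subset, not DAM's):

    % Density: between any two points there is a third.
    \forall x, y \in R:\; x < y \;\Rightarrow\; \exists z \in R:\; x < z < y
    % Completeness: every nonempty subset bounded below has a greatest
    % lower bound in R (and dually for subsets bounded above).
    \emptyset \neq S \subseteq R,\; S \text{ bounded below} \;\Rightarrow\; \inf S \in R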

	What exactly is Marvin's objection to this definition?  I will
attempt to answer this rhetorical question; Marvin's objection is
serious and non-trivial.  The objection is (I assume, and Marvin can
correct me if needed) that this definition does not account for how a
computational system "uses" the definition or does not account for how
one "thinks" about real numbers.  His objection I claim is deeply
rooted in the computational reductionist paradigm.

	However Marvin seems to explicitly accept the basic tenets of
computational reductionism, and in a sense so do I.  Computational
reductionism, at least as applied to cognition, is true in the same
sense that physical reductionism is true of our physical universe.
Furthermore I do not have an account of the COMPUTATIONAL
properties of the above definition.  In fact I do not understand the
computational properties of this definition.  I can guess.  I
will assume that there is a mentalese, a language of thought.  This
definition corresponds to some sentence in that language and there is
an inference procedure that runs on such sentences.
	However, independent of the computational behaviour of a
mathematician when thinking about real numbers, it is still
EMPIRICALLY true that mathematicians make such definitions and THINK
ABOUT REAL NUMBERS, whatever that means.  If mathematicians can do it
(and they clearly do, whatever they are doing), what right does Marvin
have to tell me that I am not allowed to think about real numbers (or
truth) just because he cannot come up with a computational definition
of these notions?

	David Mc

∂13-Jan-83  1124	DAM @ MIT-MC 	Statement, Truth, and Entailment    
Date: Thursday, 13 January 1983  14:19-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
CC:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment


	I am not arguing that the notion of statement is important
because people believe in statements, most people in the world have no
conception of what a declarative statement is.  However everyone (I
assume) USES declarative statements.  This observation is a meta
observation, a psychological observation, not a statement about what
people believe.  The truth of the matter (uhg!!) is that people speak
in declarative sentences.  This observation is independent of the
content of those sentences.  The existence of sentences is a cognitive
property of people.  Thus there is a difference between people's
belief in "angels" and "devils" and the observation that sentences are
used universally in human communication.

	David Mc

∂13-Jan-83  1252	William A. Kornfeld <BAK at MIT-OZ at MIT-MC> 	the real numbers  
Date: Thursday, 13 January 1983, 14:35-EST
From: William A. Kornfeld <BAK at MIT-OZ at MIT-MC>
Subject: the real numbers
To: dam at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

Actually, you only need one of the lub and glb properties, not both.
The lub of a set which has an upper bound can be constructed by taking
the glb of the set of its upper bounds.  This is easily shown to be the
lub.  Similarly if you assume the lub property.

The real numbers are even simpler than you thought!
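
In symbols, a sketch of the construction (U(S) is my notation for the
set of upper bounds of S):

    % Assume the glb property.  For a nonempty S that has an upper bound, let
    %   U(S) = \{ u : \forall s \in S,\; s \le u \}.
    % U(S) is nonempty and bounded below (by any element of S), so
    %   \sup S \;:=\; \inf U(S)
    % exists; checking that it is the least upper bound of S is routine.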

∂13-Jan-83  1256	MINSKY @ MIT-MC 	Statement, Truth, and Entailment 
Date: Thursday, 13 January 1983  14:38-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ, MINSKY @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment
In-reply-to: The message of 13 Jan 1983  14:15-EST from DAM


	Marvin has asked for a response to his position that the
	notion of "real number" can not be given a concise "definition".

Yes.  Of course, you can always present a definition and
assert that it captures the "notion".  The Freudian slip in your
next sentence shows how easy that is:

	The definition of a real number does in fact have a concise
	definition (at least any mathematician would tell you this).


But then you define something:

	The real numbers are a totally ordered set of points such that for
	any two of them there is one in between and for any subset
	which is bounded below there is a greatest lower bound of that
	subset, and similarly for upper bounds.  It can be proven that
	any two sets of points which meet these properties are
	isomorphic.

This shows something all right, but then you gratuitously add that

	"and thus this definition exactly specifies all the properties
	of the real numbers."

What it shows is that the Dedekind construction from the rationals is
categorical.

	What exactly is Marvin's objection to this definition?

What I mean is that it doesn't capture what people mean by numbers.
As I said, things are different inside mathematics, where people
deliberately agree to use one another's definitions.  But they don't
agree to use exactly the same "notions", whatever those are.

I have the same objection to your next message in which you confuse
"sentences" (which are moderately well-defined things) with the
"propositions" or "statements" they are alleged to relate to.  But
even the idea of "sentence" itself resists good definition, when it is
not confused with the idea of "utterance".  People in all cultures do
indeed utter things.  The idea that they utter "sentences" is a very
useful approximation, but not really much more than a policy of
classifying the utterances into ones that have the most familiar forms
- and, hypothetically, reflecting the operation of some
sentence-forming machinery.

∂13-Jan-83  1258	DAM @ MIT-MC 	Statement, Truth, and Entailment    
Date: Thursday, 13 January 1983  14:39-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment
In-reply-to: The message of 13 Jan 1983  12:52-EST from MINSKY


	I would like to respond to Marvin's attack on mathematics.  I
do not defend mathematics by saying that it is the way people think
when doing common sense reasoning, I do not believe that.  In fact
there is a school in AI I will call the "FOPC reductionists" (FOPC
stands for First Order Predicate Calculus).  The FOPC reductionists
are a far worse breed than the computational reductionists since first
order deduction is a far more limited paradigm than that of
computation in general.  A SLIGHTLY less malevolent breed are the
"deductive reductionists" which try to reduce all computation to
deduction in SOME logical system with a notion of truth.  I think that
deductive reductionism is the origin of the notion of "non-monotonic"
logic.  The idea that "belief revision" must be reduced to
deduction seems wrong to me and a result of the belief that everything
is deduction.
	I have brought these positions up because I want to
distinguish myself from them.  I view mathematics (and precise
deduction) as a limited aspect of human cognition.  Mathematics is a
certain form of human behaviour which has little in general to do with
real world issues.  Mathematics is the study of "definitional" truths
or "pure tautologies" and is clearly only one aspect of human
cognition.  The interesting thing about mathematics is not its general
utility but the fact that it exists at all.  In fact the notion of
"tautologically true" seems to be objective among human
mathematicians (though this is certainly arguable).  I take the
notion of tautological truth to be a human phenomenon which is
independent of formal descriptions of it (the best candidates proposed
so far aren't quite right).  The interesting thing about mathematics
is the existence of an objective notion of tautological truth, not
that mathematics is a model for all thought processes.

	As one final comment I would like to address Marvin's
suggestion that I am a "propositional reductionist".  I reject this
label for the simple reason that I do not take everything to be
propositions (real numbers for example are not propositions).  It is
true that I speak in declarative sentences, but who doesn't?

	David Mc

∂13-Jan-83  1301	BATALI @ MIT-MC 	Doing as as Test for Cognitive Theories.   
Date: Thursday, 13 January 1983  15:32-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Doing as as Test for Cognitive Theories.
In-reply-to: The message of 13 Jan 1983  13:09-EST from DAM

    From: DAM

    1) One can try to write a computer program which achieves some
    behaviourally specified goal (plays chess, designs circuits).

    	The first approach seems to be what Batali is talking about
    when he speaks of "doing" as opposed to just knowing.  I think that
    this approach is important, if for no other reason than that we do in fact
    need automated design systems.  However if one takes this to be the
    principal criterion for precision I think one is led down the path of
    computational reductionism.

I'm not claiming that we should write programs that just do instead of
just know.  What I am claiming is that the problems faced by
intelligences are those of doing and so our programs should solve the
problems associated with doing.  It is certainly true that to do
anything one must know quite a bit.  And some of what one does is to
gain knowledge.  And some knowledge is about actions.

A "precise" theory of doing -- what I would call a theory of
intelligent action -- would explain how an agent can produce
reasonable behaviour given its knowledge and goals.

∂13-Jan-83  1308	ISAACSON at USC-ISI 	Real numbers stuff 
Date: 13 Jan 1983 1259-PST
Sender: ISAACSON at USC-ISI
Subject: Real numbers stuff
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: minsky at MIT-MC, phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]13-Jan-83 12:59:17.ISAACSON>

In-Reply-To: Your message of Thursday, 13 Jan 1983, 14:15-EST

It seems to me that questions of this sort (real numbers
"constructivism", etc.)  are addressed by the Dutch school of
Intuitionism [Brouwer, Heyting, et al.].

-- JDI


p.s.  It may be interesting to note that Piaget and Heyting have
found a way to cooperate at one point (after a long antagonism)
and, I think, have written a book on their joint work; if
pressed, I'll be able to retrieve it.


∂13-Jan-83  1317	DAM @ MIT-MC 	Statement, Truth, and Entailment    
Date: Thursday, 13 January 1983  16:10-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment


	...
	  The idea that they utter "sentences" is a very
	useful approximation, but not really much more than a policy of
	classifying the utterances into ones that have the most familiar forms
	- and, hypothetically, reflecting the operation of some
	sentence-forming machinery.


It seems to me that I have heard this kind of argument before ...

	David Mc

∂13-Jan-83  1332	GAVAN @ MIT-MC 	reformulation 
Date: Thursday, 13 January 1983  16:24-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   William A. Kornfeld <BAK @ MIT-OZ>, Carl Hewitt <Hewitt @ MIT-OZ>,
      KDF @ MIT-OZ, phil-sci @ MIT-OZ
Subject: reformulation
In-reply-to: The message of 13 Jan 1983  09:43-EST from MINSKY

    Date: Thursday, 13 January 1983  09:43-EST
    From: MINSKY
    Sender: MINSKY
    To:   KDF
    cc:   William A. Kornfeld <BAK>, Carl Hewitt <Hewitt>, phil-sci
    Re:   reformulation

    You don't "test" reformulations to see if they are true or false.
    As always, you probe to find the range of applicability.  You
    could regard finding an inapplicable place a refutation, I suppose.

Or you could call it an exception.

∂13-Jan-83  1407	KDF @ MIT-MC 	Popper, again.  
Date: Thursday, 13 January 1983  17:02-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   BAK @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Popper, again.
In-reply-to: The message of 13 Jan 1983  11:27-EST from BAK

	Just because there is no posted list of "critical
experiments" doesn't mean that the falsification view is
unrealistic.  "Failure to disprove" can simply be that all
of the experiments we thought of came out in accordance with
the theory.  This is not to say that, as we know more, new experiments
won't suggest themselves that will lay waste to an otherwise
accepted theory.  This computation is no more incoherent
than "negation by failure" - a good technique for reasoning
in the face of limited resources and incomplete knowledge.
	Although DAM will likely disagree, resource limitations
and incomplete information are factors that make the kinds of issues
addressed in non-monotonic logic crucial to understanding minds
(although this is not an endorsement of non-monotonic logic per se).

∂13-Jan-83  1432	GAVAN @ MIT-MC 	Science vs Perception, a false dichotomy    
Date: Thursday, 13 January 1983  17:06-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Science vs Perception, a false dichotomy
In-reply-to: The message of 13 Jan 1983  11:22-EST from BATALI

    Date: Thursday, 13 January 1983  11:22-EST
    From: BATALI
    Sender: BATALI
    To:   GAVAN
    cc:   phil-sci
    Re:   Science vs Perception

        Date: Thursday, 13 January 1983  00:42-EST
        From: GAVAN

        Perception and learning are certainly integrally related to the
        philosophy of science.  In fact, one could even say that the
        philosophy of science is itself the philosophy of perception and
        learning.

    This is precisely the position that I deny.  At least I hope that we
    can agree that it is worthy of discussion.  Why, on the face of it,
    should a process that operates over generations and many thousands of
    agents have anything to do with something that takes milliseconds in a
    single animal?

The process that operates over generations and many thousands of
agents is called "the history of science."  Philosophers of science
sometimes draw on the history of science for examples for some point
they're trying to make.  It's important not to confuse the two.
Philosophers of science seek to explicate things like the necessary
and sufficient conditions of knowledge and the nature of perceptual
experience and its verifiability, and the philosophy of science (and
social philosophy, by the way) is intimately and inextricably
connected to the philosophy of mind.  For a discussion of both the
philosophy of science and the philosophy of mind (with a cognitivist
bent), see Hilary Putnam's *Reason, Truth, and History.*

    Possible answer: Because both processes seek "truth".  I'll ignore
    problems with defining or understanding truth.  

You might as well, since (I contend) there are no such things as "truth"
and "falsity."  There are only instances of consensus and the lack of
consensus.

    But notice that there
    are two aspects of seeking truth.  One is the invention of new
    concepts and vocabularies, new "ways of thinking" about the world.
    This is the sort of thing that Science (at least the public's view of
    it) spends its time on.  But another aspect of the search for truth is
    simply finding out what's going on now, in whatever vocabulary and
    with whatever concepts available.

I would agree that these are ways of coming to a consensus.  So is
demonstration.

    For example: the development of the Copenhagen (probabilistic)
    interpretation of the Schrodinger equation is an example of the first
    kind of truth-seeking.  The experimental measurement of the charge of
    the electron is an example of the second kind of truth-seeking.

    I claim that perception is the second kind.  A rat must describe its
    environment quickly and accurately enough to tell what it should do.
    It need not create any new concepts or theories in the process.  

How do you know?  You can't posit that rats need not conceptualize or
hypothesize just because rats don't have the physical apparatus
necessary to articulate a concept or an hypothesis, or because you
don't have the apparatus necessary to detect and measure rat concepts
and rat theories.

    In fact, it seems to me that some "lower" intelligences could be limited
    just in their ability to create new concepts but would still be very
    good at perception.

Are you saying that percepts are independent of concepts?  If so, I'd
dispute this.  It seems to me that concepts, taken to mean
summarizations of experience, are an important source of perceptual
constraint (this is what Hegel meant by "in the thing, the
characteristics of reflection recur as existent").  It's also the
source of the fallibility of perception.  I doubt that rats are very
good at perception anyway (taking human perception as a baseline).

    Even lots of learning might be just creating new structures in a given
    and unchanging concept-set.  The actual creation of new concepts
    probably happens very rarely.  

OK, but if you add new structure to a concept-set, then in one sense you've
created a new concept.

    So my point:  The standard conception of
    science as a balls-to-the-wall dash after Truth is not the same
    problem faced by a perceiving agent or certain kinds of learning
    agents.  So issues raised by one enterprise might or might not cross
    the boundaries.

Science is by no means "a balls-to-the-wall dash after Truth."

It depends upon what issues you're talking about.  History of science
issues concerning who invented the calculus are certainly irrelevant.
But issues in the philosophy of science, such as the nature of perception,
the necessary conditions of knowledge, the means of coming to consensus,
etc., certainly ARE relevant.

Are not scientists (and everyone else in society, for that matter)
perceiving and learning agents?

∂13-Jan-83  1523	HEWITT @ MIT-XX 	Peirce for message passing semantics? 
Date: Thursday, 13 January 1983  14:28-EST
From: HEWITT @ MIT-XX
To:   ISAACSON @ USC-ISI
Cc:   Hewitt @ MIT-XX, phil-sci @ MIT-MC
Reply-to:  Hewitt at MIT-XX
Subject: Peirce for message passing semantics?
In-reply-to: The message of 13 Jan 1983  05:55-EST from ISAACSON at USC-ISI


I have read some of Peirce's stuff.  It's not clear
to me what new insights he provides.  Perhaps
I simply haven't read the right citations.

∂13-Jan-83  1547	DAM @ MIT-MC 	non-monotonic logic  
Date: Thursday, 13 January 1983  18:41-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   KDF @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: non-monotonic logic


	...
		Although DAM will likely disagree, resource limitations
	and incomplete information are factors that make the kinds of issues
	addressed in non-monotonic logic crucial to understanding minds
	(although this is not an endorsement of non-monotonic logic per se).

On the contrary I agree completely.  In my comment on non-monotonic logic
I simply wanted to say that viewing belief revision as deduction derives
from a kind of "deductive reductionism".  Belief revision itself is very
important and in particular we must understand when and how beliefs are
retracted.  All I am saying is that this process should not be viewed
as INFERENCE.

	David Mc

∂13-Jan-83  1612	BATALI @ MIT-MC 	Science vs Perception, a TRUE dichotomy    
Date: Thursday, 13 January 1983  18:59-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Science vs Perception, a TRUE dichotomy
In-reply-to: The message of 13 Jan 1983  17:06-EST from GAVAN


        I claim that perception is the second kind.  A rat must describe its
        environment quickly and accurately enough to tell what it should do.
        It need not create any new concepts or theories in the process.  

    How do you know?  You can't posit that rats need not conceptualize or
    hypothesize just because rats don't have the physical apparatus
    necessary to articulate a concept or an hypothesis, or because you
    don't have the apparatus necessary to detect and measure rat concepts
    and rat theories.

I am arguing that there is a difference (say) between seeing that
there is a lion in the corner and coming up with the concepts of
"lion" and "corner."  The former I would call perception, not the
latter.  A rat could have the concepts by instinct or something, but
would still have to do perception to determine what was in the corner.

Science spends some of its time making and refining concepts, and also
worries about what's "out there."  So what I am calling perception (the
finding out of what's "out there") is a part of what science does, but
it is not ALL of what science does.

∂13-Jan-83  1733	GAVAN @ MIT-MC 	Science vs Perception, a LUDICROUS dichotomy
Date: Thursday, 13 January 1983  20:27-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Science vs Perception, a LUDICROUS dichotomy
In-reply-to: The message of 13 Jan 1983  18:59-EST from BATALI

    Date: Thursday, 13 January 1983  18:59-EST
    From: BATALI
    Sender: BATALI
    To:   GAVAN
    cc:   phil-sci
    Re:   Science vs Perception, a RIDICULOUS dichotomy

            I claim that perception is the second kind.  A rat must describe its
            environment quickly and accurately enough to tell what it should do.
            It need not create any new concepts or theories in the process.  

        How do you know?  You can't posit that rats need not conceptualize or
        hypothesize just because rats don't have the physical apparatus
        necessary to articulate a concept or an hypothesis, or because you
        don't have the apparatus necessary to detect and measure rat concepts
        and rat theories.

    I am arguing that there is a difference (say) between seeing that
    there is a lion in the corner and coming up with the concepts of
    "lion" and "corner."  The former I would call perception, not the
    latter.  A rat could have the concepts by instinct or something, but
    would still have to do perception to determine what was in the corner.

OK. Some ideas (concepts) are innate then (which ones? lions?
corners?).  I'm suggesting that perceptions can be used by rats and
scientists to update concepts and that both use concepts to constrain
their percepts.  A rat might not NEED to consult its concept of a
corner in order to perceive one, but if it needs to perceive one
"quickly and accurately enough to tell what it should do," as you
suggest, then (once it has a clue that what appears in its visual
field might be a corner) quick sub-conscious reference to its concept
of corner could be NEEDED to speed up the process.  The concept can
present "hypotheses" about where to look for verification that the
thing in the rat's perceptual field is indeed a corner.  Conversely,
any peculiarities of this particular corner might be added to the
concept, thus making future perceptions even quicker.

    Science spends some of its time making and refining concepts, and also
    worries about what's "out there."  So what I am calling perception (the
    finding out of what's "out there") is a part of what science does, but
    it is not ALL of what science does.

So there's no dichotomy after all.

I'm suggesting that the rat's and the scientist's percepts and
concepts depend on one another.  Where else do the data come from for
making and refining concepts?  Seeing involves two processes --
perception and observation.  Rat and scientist perceptions provide
data for conceptualization.  Concepts, in turn, provide observational
strategies.  It's analogous (sort of) to Galileo's optical concepts
providing him with a new observational strategy (the telescope), which
in turn extended his astronomical concepts.  

"In the thing, all the characteristics of reflection recur as existent."

∂13-Jan-83  1747	GAVAN @ MIT-MC 	Popper, again.
Date: Thursday, 13 January 1983  20:40-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Popper, again.
In-reply-to: The message of 13 Jan 1983  17:02-EST from KDF

    Date: Thursday, 13 January 1983  17:02-EST
    From: KDF
    Sender: KDF
    To:   BAK
    cc:   JCMa, phil-sci
    Re:   Popper, again.

    	Just because there is no posted list of "critical
    experiments" doesn't mean that the falsification view is
    unrealistic.  

Right.  But the falsification view is unrealistic for another reason.
Any theorist can defend his/her theory by saying that an otherwise
falsifying result is just an exception.  All theories are
unfalsifiable.

∂13-Jan-83  1829	GAVAN @ MIT-MC 	Peirce for message passing pragmatics! 
Date: Thursday, 13 January 1983  21:24-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   HEWITT @ MIT-XX, jcma @ MIT-OZ, phil-sci @ MIT-MC
Subject: Peirce for message passing pragmatics!
In-reply-to: The message of 13 Jan 1983  19:19-EST from ISAACSON at USC-ISI

    I suspect that Gavan and JCMa are fluent in much of this stuff
    and may want to contribute some pertinent references.

Thanks a lot, Joel!  Don't you have anything to offer in the way of
references?

In light of what I sent yesterday about consensus, you might (or might
not) want to look at Nicholas Rescher's *Dialectics: A
Controversy-Oriented Approach to the Theory of Knowledge.* Rescher's
also written an excellent small book summarizing Peirce's philosophy
of science. (Actually, he produces two or three books per year.  Don't
ask me how.)

*Dialectics* is an effort to model the logic of debate, which would
seem to be highly relevant to your project.  Alker's recently used it
to successfully model the logic of Thucydides' Melian dialogue.

Of course, it's Hegelian, but then so is Peirce (except that Peirce is
a realist, not an idealist).

∂13-Jan-83  1832	ISAACSON at USC-ISI 	Peirce for message passing pragmatics!
Date: 13 Jan 1983 1619-PST
Sender: ISAACSON at USC-ISI
Subject: Peirce for message passing pragmatics!
From: ISAACSON at USC-ISI
To: HEWITT at MIT-XX
Cc: gavan at MIT-MC, jcma at MIT-MC, phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]13-Jan-83 16:19:10.ISAACSON>

In-Reply-To: Your message of Thursday, 13 Jan 1983, 14:28-EST


From your brief description of your project on "Message Passing
Semantics" it appears to me that what you're trying to do could,
perhaps, be termed "Message Passing Pragmatics."  Hence the
connection to Peirce, the so-called "Father of Pragmatism."

The term "pragmatics" is bandied around in some CS circles, but
usually not in the full Peircean, or semiotic, sense.  In
semiotics, pragmatics (as clearly distinct from semantics) is a
theory of the relations between signs and those who produce or
receive and understand them.  (Is this "message passing"
understanding in your sense?)

I suspect that Gavan and JCMa are fluent in much of this stuff
and may want to contribute some pertinent references.

-- JDI


∂13-Jan-83  1849	MINSKY @ MIT-MC 	Solomonoff et alia
Date: Thursday, 13 January 1983  21:45-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff et alia
In-reply-to: The message of 13 Jan 1983  21:08-EST from GAVAN


Of course, he counts the exceptions in with the hypothesis, so when
there are too many the hypothesis isn't simple any more.
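
A toy illustration of that bookkeeping, in Python (the data, the two
hypotheses, and the cost model are all invented): the description length
charged to a hypothesis includes a listing of its exceptions, so a short
rule with many exceptions ends up longer than a wordier rule with none.

# Toy minimum-description-length comparison: hypothesis plus exceptions.
data = [(x, x % 7 == 0) for x in range(1, 200)]   # label: divisible by 7?

def description_length(hypothesis_text, predict, data):
    """Cost of stating the hypothesis plus listing every exception."""
    exceptions = [x for x, label in data if predict(x) != label]
    return len(hypothesis_text) + sum(len(str(x)) + 1 for x in exceptions)

h_simple = ("nothing is divisible by 7", lambda x: False)            # short rule, 28 exceptions
h_better = ("x is divisible by 7 iff x mod 7 == 0", lambda x: x % 7 == 0)

for text, predict in (h_simple, h_better):
    print(text, "->", description_length(text, predict, data))
# The blanket rule pays for all of its exceptions and ends up longer than
# the slightly wordier rule that has none.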

∂13-Jan-83  1908	GAVAN @ MIT-MC 	Solomonoff et alia 
Date: Thursday, 13 January 1983  21:08-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff et alia
In-reply-to: The message of 13 Jan 1983  12:21-EST from MINSKY

    Date: Thursday, 13 January 1983  12:21-EST
    From: MINSKY
    Sender: MINSKY
    To:   BAK, MINSKY
    cc:   JCMa, phil-sci
    Re:   Popper, again.

    . . .

    Philosophically, I consider the idea very clear and sensible -
    exactly because it does seem to answer all the objections to "naive"
    simplicity-criteria theories of inference.  In
    particular, it does deal with (i) the idea of all possible
    hypotheses and (ii) the complaint that simplicity is relative to
    what one assumes available at the start.

This is exciting, if "true," since it would mean that the simplicity
criterion can be resurrected, thereby making unnecessary Lakatos'
reformulation (motivated by Kuhn's critique) of sophisticated
methodological falsificationism, which Feyerabend shows does not save
us from irrationalism.

∂13-Jan-83  2022	GAVAN @ MIT-MC 	Solomonoff et alia 
Date: Thursday, 13 January 1983  23:15-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff et alia
In-reply-to: The message of 13 Jan 1983  21:45-EST from MINSKY

    Date: Thursday, 13 January 1983  21:45-EST
    From: MINSKY
    Sender: MINSKY
    To:   GAVAN
    cc:   BAK, JCMa, phil-sci
    Re:   Solomonoff et alia

    Of course, he counts the exceptions in with the hypothesis, so when
    there are too many the hypothesis isn't simple any more.

Does he deal with background theories (like Galileo's optical theory)
which must be assumed as unproblematic when considering another theory
(like Galileo's astronomical theory)?  Do these complexify a theory
for him?  Is this all done with conditional probabilities, or
something more sophisticated?

∂13-Jan-83  2122	ISAACSON at USC-ISI 	Re:  Peirce for message passing pragmatics! 
Date: 13 Jan 1983 2104-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Peirce for message passing pragmatics!
From: ISAACSON at USC-ISI
To: HEWITT at MIT-XX
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]13-Jan-83 21:04:36.ISAACSON>


Some more stuff on Peirce.

1. Peirce's Concept of Sign, D. Greenlee, Mouton, 1973

2. Peirce's Epistemology, W. H. Davis, Martinus Nijhoff, 1972


There is a Society of Charles Sanders Peirce with certain
publications and a bibliography containing thousands of items
with hundreds added every year.

I gave it some more thought and I am inclined to believe that
Peircian "pragmatics" should be examined in the context of your
inquiry.


∂14-Jan-83  0116	John McCarthy <JMC@SU-AI>
Date: 14 Jan 1983 0112-PST
From: John McCarthy <JMC@SU-AI>
To:   phil-sci%mit-oz at MIT-MC  

	Here is an example of a possible payoff from distinguishing truth
from consensus.  Suppose we try to develop a formal "meta-epistemology"
based on a dynamical system (system evolving in time according to your
favorite formalism) called the "world" and a distinguished subsystem
called the "scientist".  We suppose that certain functions of the
"scientist" are to be interpreted assertions about the "world".
We can study the effects of various "scientific strategies" for
finding information about the world.  Some "worlds" are more knowable
than others.  Some strategies are more effective than others.  For
example, a realist might hope to prove that a strategy that confined
itself to relations among sense data would never learn certain facts
about the world that more liberal ontology could discover.  We might
even be able to prove that a "scientists" using consensual notions
of "truth" would be unable to formulate certain truths.  Probably,
before such a formal meta-epistemology can be developed, it will
be necessary to find a simpler yet relevant system to study.
To summarize, in order to model scientific and other knowledge seeking
activity, it will be necessary to distinguish what is true in
the model world from what the model scientist believes.
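
A minimal sketch, in Python, of the kind of setup being proposed (every
detail - the hidden dynamics, the two strategies, the scoring - is
invented for illustration): a "world" with state the "scientist" never
observes directly, and two strategies whose predictions we, standing
outside the model, can score for correspondence with the world itself.

# Toy meta-epistemology: a "world" with hidden state, "scientists" whose
# beliefs are scored for truth by us, outside the model.
def world_step(state):
    """Hidden dynamics: a period-4 counter the scientist never sees."""
    hidden = (state["hidden"] + 1) % 4
    return {"hidden": hidden, "sense": 1 if hidden == 0 else 0}

def sense_data_strategy(history):
    """Confines itself to relations among sense data: repeats the last datum."""
    return history[-1] if history else 0

def liberal_ontology_strategy(history):
    """Posits an unobserved period-4 clock and predicts with it."""
    return 1 if len(history) % 4 == 3 else 0

def score(strategy, steps=400):
    state, history, correct = {"hidden": 0, "sense": 1}, [], 0
    for _ in range(steps):
        prediction = strategy(history)
        state = world_step(state)
        correct += (prediction == state["sense"])   # truth = correspondence with the world
        history.append(state["sense"])
    return correct / steps

print("sense-data only :", score(sense_data_strategy))        # about 0.5
print("liberal ontology:", score(liberal_ontology_strategy))  # 1.0
# The strategy that postulates structure beyond its sense data tracks the
# world better - a toy version of the realist's hoped-for theorem.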

	My previous message was imprecise.  Godel proved that in first
order logic, validity (truth in all models) coincides with provability.
Even to formulate the result required keeping the concepts distinct.  I
should mention, however, that van Heijenoort informed me that Hilbert and
Ackermann formulated the problem in their book on logic even though
Hilbert's philosophy was nominally formalistic.


∂14-Jan-83  0202	GAVAN @ MIT-MC 	theories of truth  
Date: Friday, 14 January 1983  04:58-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ mit-mc
Subject: theories of truth
In-reply-to: The message of 14 Jan 1983  02:03-EST from John McCarthy <JMC at SU-AI>

    Date: Friday, 14 January 1983  02:03-EST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan, phil-sci%mit-oz at MIT-MC

    "I'm tempted to ask just what TRUTH is and what makes you think there's
    any such thing, but that's a little off the subject.  Maybe not.  To
    me, there is no truth but only consensus.  What we call "truth" is
    only what we have agreed upon, given certain conventions which we agree
    are "rational."  It seems to me that the notion of coming to a consensus
    brings us back to the problem which motivated the discussion.  How do we
    in society and mental agents in a society-of-mind consensually validate
    our beliefs and theories?"

    	The above quote from Gavan strikes me as muddled and scientifically
    unpromising.  

Well, the concept is taken from a social philosopher, Jurgen Habermas
(see his "Theories of Truth").  It may well be "scientifically
unpromising," but that of course depends on what you think science is.

    It is similar to the Vienna circle ideas of the 1920s.
    A young graduate student named Kurt Godel attended the Circle meetings
    and had a different idea.  His idea was that truth was one thing conceptually
    and what you could prove was another.  For his PhD thesis he proved that
    in the case of first order logic the two coincided.  Later he was able
    to show that in the case of the arithmetic of Principia Mathematica and
    related systems they could not coincide.  Still later he was able to
    show that the continuum hypothesis could not be disproved from the
    Godel-Bernays axioms of set theory while maintaining his belief that
    the continuum hypothesis is false.  Another young man named Alfred Tarski
    was able to show around 1930 that truth in arithmetic was not arithmetically
    definable.

Yes, yes, yes.  All this is "true" or "agreed upon" within logic, set
theory, arithmetic, etc., but how do I know that these things are
"true"?  ON WHAT ARE THE AXIOMS OF MATHEMATICS GROUNDED OTHER THAN ON
SOCIAL CONVENTION?  I realize this might seem like heresy to you, and
yes, I do use mathematical techniques from time-to-time.  I even
balance my check-book from time-to-time.  I can do so because I've
agreed to follow this social convention (so you see I'm not THAT
heretical).

    	In my opinion, a person who makes a clear distinction between
    truth and what is "consensually validated" will have a better chance of
    advancing philosophy and/or artificial intelligence than someone who
    muddles them.  He might, for example, be able to show that the notions
    coincide in some cases and differ in others.

I think you might be reacting to an undesirable consequence of the
"consensus theory of truth" if taken from the perspective of the
individual.  Since I was speaking from a social perspective, this was
not intended.  The consequence for the individual is that, if truth is
what is agreed upon, then there's no room for any individual to
challenge it.  Indeed, any individual who believes the consensus
theory of truth at the individual level of analysis cannot possibly
contribute anything new, since it would necessarily not be validated
consensually and would thus be untrue.  

At the individual level of analysis, a better theory of truth would be
the "consistency theory of truth," which holds that "truth" for the
individual is what can be incorporated into the web of beliefs with a
minimal amount of damage to that network.  (Habermas disputes this,
but I don't.  See Hilary Putnam, although the idea probably originates
(in modern times, at least) with Leon Festinger).

Now the consistency theory (at the individual level) and the consensus
theory (at the social level) are by no means incompatible.  Some
individual scientist might develop a new theory about something.  For
him/her, this theory is true, since it is consistent with many other
of his/her beliefs.  But for society, it's not "true" until some sort
of consensus has been reached, at least among the members of the
particular linguistic community concerned with the problem domain.

Perhaps I should have made this distinction earlier, but we (or at
least some of us) were discussing science at the social level, for
pragmatic reasons.  Also, I raised the consensus theory to dispute
another theory of truth (the correspondence theory) implicit in some
other remarks about perception.  Anyway, I appreciate your making it
clear that I needed to elaborate. 

You've stated what you think truth is not, but you haven't stated what
you think it is.

∂14-Jan-83  0250	KDF @ MIT-MC 	Confounding
Date: Friday, 14 January 1983  05:42-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Confounding
In-reply-to: The message of 13 Jan 1983 04:43-EST from JCMa

	If I remember Marvin's papers, in the Society of Mind each
agent is so simple that communication is nothing like "conversation".
The reason for the architecture was to avoid having lots of little
languages and language users, by making direct connections between
agents relevant to each other.  Instilling them with the ability to
"hint, argue, persuade, cajol, etc" (to take some language from early
AI vision papers) seems to re-introduce the complexity that the theory
was trying to avoid.  I think the issue you raised about "making up
your mind" is a good one - but note that it is hard (at least for me)
to see that human societies do that very often.  Dissent, thankfully,
is always with us.  Taking the Society-of-Mind view (Which I usually
don't, because the level of abstraction above it - Doyle's sort of
stuff - seems to be more fruitful to think about at the moment), we
find a superficial similarity in that we may be uncertain about
something or other, but does that necessarily correspond to some
agents "fighting it out?"  Limited communication between agents would
appear to rule out most correspondences with human societies, if one
actually tries to push the metaphor a bit.

∂14-Jan-83  0322	JCMa@MIT-OZ at MIT-MC 	Peirce for message passing semantics?
Date: Friday, 14 January 1983, 06:18-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Peirce for message passing semantics?
To: Hewitt@MIT-XX
Cc: ISAACSON@USC-ISI, phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 14:28-EST from HEWITT at MIT-XX

    Date: Thursday, 13 January 1983  14:28-EST
    From: HEWITT @ MIT-XX
    To:   ISAACSON @ USC-ISI
    Cc:   Hewitt @ MIT-XX, phil-sci @ MIT-MC
    Reply-to:  Hewitt at MIT-XX
    Subject: Peirce for message passing semantics?
    In-reply-to: The message of 13 Jan 1983  05:55-EST from ISAACSON at USC-ISI


    I have read some of Peirce's stuff.  It's not clear
    to me what new insights he provides.  Perhaps
    I simply haven't read the right citations.

Try thinking of each symbol as an actor which knows how to act vis-a-vis
other symbols with which it interacts in its world.
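
A minimal Python sketch of that suggestion (the class, the symbols, and
the messages are invented for illustration): each symbol is a little
actor, and its "meaning" is nothing over and above how it responds to
the messages other symbols send it.

# Toy "symbols as actors": behaviour is installed per message selector.
class SymbolActor:
    def __init__(self, name):
        self.name = name
        self.handlers = {}                    # message selector -> behaviour

    def on(self, selector, behaviour):
        self.handlers[selector] = behaviour

    def send(self, selector, *args):
        """The 'meaning' of a message is just its effect on behaviour."""
        handler = self.handlers.get(selector)
        return handler(self, *args) if handler else None

lion, corner = SymbolActor("lion"), SymbolActor("corner")
lion.on("dangerous?", lambda self: True)
corner.on("contains", lambda self, other: f"{other.name} is in the {self.name}")

print(corner.send("contains", lion))    # "lion is in the corner"
print(lion.send("dangerous?"))          # True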

∂14-Jan-83  0340	JCMa@MIT-OZ at MIT-MC 	Statement, Truth, and Entailment
Date: Friday, 14 January 1983, 06:34-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Statement, Truth, and Entailment
To: DAM@MIT-MC
Cc: phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 14:39-EST from DAM at MIT-MC

    Mail-From: DAM created at 13-Jan-83 14:39:34
    Date: Thursday, 13 January 1983  14:39-EST
    Sender: DAM @ MIT-OZ
    From: DAM @ MIT-MC
    To:   MINSKY @ MIT-OZ
    Cc:   phil-sci @ MIT-OZ
    Subject: Statement, Truth, and Entailment
    In-reply-to: The message of 13 Jan 1983  12:52-EST from MINSKY

	    As one final comment I would like to address Marvin's
    suggestion that I am a "propositional reductionist".  I regect this
    label for the simple reason that I do not take everything to be
    propositions (real numbers for example are not propositions).  It is
    true that I speak in declarative sentences, but who doesn't?
	    David Mc

What's wrong with being a "propositional reductionist," as long as there
are no propositions which you cannot reduce?

I guess the last clause is the catch:  It is typically possible to specify
counter-examples for people who advance themselves as "propositional
reductionists" because their reduction schemes lose information.  So, the
question is really how to reduce without losing information.

∂14-Jan-83  0342	GAVAN @ MIT-MC 	consensus
Date: Friday, 14 January 1983  06:33-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   JMC @ SU-AI
Cc:   phil-sci @ MIT-MC
Subject: consensus
In-reply-to: The message of 13 Jan 1983  14:38-EST from MINSKY

Apropos of your pooh-poohing of the consensus theory, here's something
(taken out of context) that Marvin sent to phil-sci yesterday.

    ". . . things are different inside mathematics, where people
    deliberately agree to use one another's definitions."

Do you dispute this?

∂14-Jan-83  0447	JCMa@MIT-OZ at MIT-MC 	Approximation Theory of Truth: Re: Theories Of Truth
Date: Friday, 14 January 1983, 07:42-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Approximation Theory of Truth: Re: Theories Of Truth
To: JMC@SU-AI
Cc: phil-sci@MIT-OZ at MIT-MC
In-reply-to: The message of 14 Jan 83 02:03-EST from John McCarthy <JMC at SU-AI>

    Date: 13 Jan 1983 2303-PST
    From: John McCarthy <JMC@SU-AI>
    To:   gavan at MIT-MC, phil-sci%mit-oz at MIT-MC

	    In my opinion, a person who makes a clear distinction between
    truth and what is "consensually validated" will have a better chance of
    advancing philosophy and/or artificial intelligence than someone who
    muddles them.

I agree emphatically with your assessment.  Here are some additional
considerations.

All collective knowledge (knowledge shared by some community of
speaker/understanders, e.g., scientific communities) should be
considered from the "consensual perspective."  To consider perceptual
Truth as Truth is to forget that our perceptions of truth change over time.
Thus, at any one time, our perception of Truth can only be an
approximation.  In principle, it is possible to generate a marginally
better theory (or a substantially better, more general theory which
subsumes the former theory).  In this view, our image of the real Truth
that lies out there is the theory which we believe our theories asymptote
towards.  However, this is an extrapolation which may or may not hold,
which is contingent on the actual course of the process.  This view is
the approximation theory of truth.

The major role of hypothesis formation (abduction) in this view is the
generation of new hypotheses which can be tested and then added
as axioms to the current theory.

Defining an epistemology which is guaranteed to get better (debug
itself) is, in my view, what philosophy of Science is (should be) about.

∂14-Jan-83  0448	JCMa@MIT-OZ at MIT-MC 	Subject and In-Reply-To fields in messages
Date: Friday, 14 January 1983, 07:45-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Subject and In-Reply-To fields in messages
To: jmc@su-ai
Cc: phil-sci@MIT-OZ at MIT-MC

Everyone:

Please put subject and in-reply-to fields on your messages.  It
facilitates tracking conversations in the phil-sci discussion.

∂14-Jan-83  0952	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Friday, 14 January 1983  12:47-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JMC @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth

	Date: Friday, 14 January 1983  02:03-EST
	From: John McCarthy <JMC at SU-AI>
	To:   gavan, phil-sci%mit-oz at MIT-MC

	...
		In my opinion, a person who makes a clear distinction between
	truth and what is "consensually validated" will have a better chance of
	advancing philosophy and/or artificial intelligence than someone who
	muddles them.  He might, for example, be able to show that the notions
	coincide in some cases and differ in others.

	It seems to me that the right distinction is between
MATHEMATICAL and EMPIRICAL truth.  Mathematical truth consists of
DEFINITIONAL TAUTOLOGIES.  If I tell you that P implies Q and that P,
and if I DEFINE the meaning of implies, you must conclude Q.  I
disagree with Marvin that mathematical truth is just a convention.
The "conventions" in mathematics are the agreed upon definitions, but
the truths which follow from those definitions are not agreed upon,
they are objectively true.  Furthermore I take tautological truth to
be independent of any formalization of mathematics.  I actually assume
that there is an innate inference mechanism for determining
tautological truths, and such truths are "objective" in the sense that
we all share the same innate mechanism.  I realize that this is an
extreme position to take but I think it is a defensible extreme.
	Tarskian truth valued semantics is a very good conceptual tool
in understanding mathematical truth.  Note that the Tarskian notion of
truth is a DEFINED notion while I take human tautological truth to be
a real world phenomenon.  It seems that the real world phenomenon of
mathematical truth can best be understood today in the conceptual
framework of the defined notion of Tarskian truth.
	Empirical truth is completely different.  Consider McCarthy's
scientist in a well-defined dynamic universe.  The only thing the
scientist actually has contact with is behaviour and sense data, so it
seems that ultimately we have to define a notion of "truth" in terms
of sense data and behaviour.  We might assume that the sense data is
in the form of sentences in some a-priori perceptual language, but
this is a big assumption.  It seems to me more likely that the
scientist constructs the language itself in response to sense data.
Given these considerations it seems to me that EMPIRICAL truth is more
likely to be achieved by consensus and not objectively present in
sense data (although an Occum's razor argument could be used to define
an objective empirical truth).

	My basic point is the difference between DEFINITIONAL truth
and EMPIRICAL or real-world truth.

	David Mc

∂14-Jan-83  1011	John McCarthy <JMC@SU-AI> 	consensus theory of truth        
Date: 14 Jan 1983 0954-PST
From: John McCarthy <JMC@SU-AI>
Subject: consensus theory of truth    
To:   gavan at MIT-MC, dam at MIT-MC,
      phil-sci%mit-oz at MIT-MC   

(If our mailer permits a  REPLY-TO  field, I don't yet know how to use it).
I am, as I suppose you suspect, an adherent of the correspondence theory of
truth, both within mathematics and outside it.  Certainly there are differences
between mathematics and the common sense world, and I expect to address these.
DAM seems not to have understood that my meta-epistemology proposal
involved the correspondence theory.  While the "scientist" in that proposal
can only learn about the "world" through his senses, we mathematicians
can study the correspondence between what he believes and what is true
of that "world".  We can study what correspondences are possible and
what strategies achieve them.
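
[Editorial sketch, not JMC's formalism: the point that the outside
mathematician can measure the correspondence directly, while the "scientist"
only ever sees sense data, can be made concrete with a toy model.  The facts,
the beliefs, and the scoring rule below are invented for illustration.]

# the "world" as seen from outside the meta-epistemological model
world = {"snow_is_white": True, "particle_visible": False}

# what the scientist has come to believe from sense data alone
scientist_beliefs = {"snow_is_white": True, "particle_visible": True}

def correspondence(beliefs, facts):
    """Fraction of the scientist's beliefs that are true of the world."""
    shared = [p for p in beliefs if p in facts]
    return sum(beliefs[p] == facts[p] for p in shared) / len(shared)

print(correspondence(scientist_beliefs, world))   # 0.5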


∂14-Jan-83  1339	KDF @ MIT-MC 	Interaction between theory and observation    
Date: Friday, 14 January 1983  16:39-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Interaction between theory and observation
In-reply-to: The message of 14 Jan 1983  05:50-EST from The Mailer Daemon <Mailer>

   	A good example of how what you observe depends on
your theories is described by Sue Carey and Marianne Wiser
in a paper called "When Heat and Temperature Were One".
Early experiments with thermometers were tainted by recent
success in mechanics.  The experimenters thought of temperature
/heat as a force, so instead of measuring the level to which
mercury settled in an ice bath, they measured how fast it moved
when the thermometer was plunged into the bath!  Given their model,
it was the reasonable thing to do.  The lack of consistent results
("no theory is unfalsifiable"?) drove the search for a better
theory. 

∂14-Jan-83  1349	KDF @ MIT-MC 	Reductionism    
Date: Friday, 14 January 1983  16:49-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   jcma @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Reductionism

	The problem with reductionism is not, I think, that it loses
information.  Any abstraction is supposed to do precisely that.
A good example is the levels of explanation story in Marr's book.
A complex phenomenon (such as vision or intelligence) requires
explanation at several levels of detail - what you are computing, how
you compute it, and how you can implement the computation.  No level
explicitly contains EVERYTHING; the idea is that what seem to be
natural "module boundaries" are preserved.  Most reductionists I have seen
fail to convince because they blur distinctions within the phenomena that
seem important.  The connectionist theories of mind strike me as one example.

∂14-Jan-83  1519	DAM @ MIT-MC 	consensus theory of truth 
Date: Friday, 14 January 1983  18:17-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JMC @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: consensus theory of truth


	Date: Friday, 14 January 1983  12:54-EST
	From: John McCarthy <JMC at SU-AI>

	I am, as I suppose you suspect, an adherent of the correspondence
	theory of truth, both within mathematics and outside it. ....
	DAM seems not to have understood that my meta-epistemology proposal
	involved the correspondence theory. ...

	I am very familiar with the Tarskian notion of truth and in fact
with all of the mathematical results you mentioned in your previous messages.
However I must admit that I am not familiar with this "correspondence theory
of truth" as it might be applied to the relationship between real-world
beliefs and the ACTUAL world.  Are you REALLY proposing that the ACTUAL
world is a first order structure?  If so what is the signature of that
structure (i.e. what set of symbols does this structure provide an
interpretation for)?  I am indeed perplexed; this position seems very
strange to me.  Could you please explain.

	David Mc


∂14-Jan-83  1818	Carl Hewitt <Hewitt at MIT-OZ at MIT-ML> 	Peirce for message passing semantics? 
Date: Friday, 14 January 1983, 21:17-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-ML>
Subject: Peirce for message passing semantics?
To: JCMa at MIT-OZ at MIT-ML
Cc: Hewitt at MIT-XX, ISAACSON at USC-ISI, phil-sci at MIT-OZ at MIT-ML,
    Hewitt at MIT-OZ at MIT-ML
In-reply-to: The message of 14 Jan 83 06:18-EST from JCMa at MIT-OZ

    Date: Friday, 14 January 1983, 06:18-EST
    From: JCMa@MIT-OZ
    Subject: Peirce for message passing semantics?
    To: Hewitt@MIT-XX
    Cc: ISAACSON@USC-ISI, phil-sci@MIT-OZ
    In-reply-to: The message of 13 Jan 83 14:28-EST from HEWITT at MIT-XX

        Date: Thursday, 13 January 1983  14:28-EST
        From: HEWITT @ MIT-XX
        To:   ISAACSON @ USC-ISI
        Cc:   Hewitt @ MIT-XX, phil-sci @ MIT-MC
        Reply-to:  Hewitt at MIT-XX
        Subject: Peirce for message passing semantics?
        In-reply-to: The message of 13 Jan 1983  05:55-EST from ISAACSON at USC-ISI


        I have read some of Peirce's stuff.  It's not clear
        to me what new insights he provides.  Perhaps
        I simply haven't read the right citations.

    Try thinking of each symbol as an actor which knows how to act vis-a-vis
    other symbols with which it interacts in its world.

This doesn't help me very much.  Do you have an example in mind?

∂14-Jan-83  1848	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Confounding  
Date: Friday, 14 January 1983, 21:45-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Confounding
To: KDF at MIT-MC
Cc: JCMa at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 14 Jan 83 05:42-EST from KDF at MIT-MC

    Date: Thursday, 13 January 1983  01:23-EST
    From: KDF @ MIT-MC
    To:   GAVAN @ MIT-OZ
    Cc:   BATALI @ MIT-OZ, phil-sci @ MIT-OZ
    Subject: Confounding
    In-reply-to: The message of 13 Jan 1983  00:42-EST from GAVAN


        brings us back to the problem which motivated the discussion.  How do we
        in society and mental agents in a society-of-mind consensually validate
        our beliefs and theories?

    It is far from clear that any kind of consensus about beliefs is
    needed in the society-of-mind, and if it is, that it would be anything
    like the mechanisms for human societies.  Except for a few related
    agents, the beliefs/goals/theories/whatever of an agent are not ABOUT
    the same things as the others - if they were, we are left with little
    homunculi! ...

To me this sounds like it may be a genuine difference between the Society
of the Mind and the Science/Engineering Community Metaphors.  Perhaps
they really are incompatible!

    Date: Friday, 14 January 1983  05:42-EST
    From: KDF @ MIT-MC
    To:   JCMa @ MIT-OZ
    Cc:   phil-sci @ MIT-OZ
    Subject: Confounding
    In-reply-to: The message of 13 Jan 1983 04:43-EST from JCMa

            If I remember Marvin's papers, in the Society of Mind each
    agent is so simple that communication is nothing like "conversation".
    The reason for the architecture was to avoid having lots of little
    languages and language users, by making direct connections between
    agents relevant to each other.  Instilling them with the ability to
    "hint, argue, persuade, cajol, etc" (to take some language from early
    AI vision papers) seems to re-introduce the complexity that the theory
    was trying to avoid.

I don't think the state of the art of message-passing systems is up
to the subtlety of communication implied by "hint, cajole, etc.".
However, I think that "argue, persuade, etc." is a reasonable goal to
aim at.  This introduces a certain level of complexity, but I don't
see how it is avoidable.

∂14-Jan-83  1853	John McCarthy <JMC@SU-AI> 	Consensus theory of truth   
Date: 14 Jan 83 1549
From: John McCarthy <JMC@SU-AI>
Subject: Consensus theory of truth   
To:   dam at MIT-MC
CC:   phil-sci%mit-oz at MIT-MC 

Subject: Consensus theory of truth
Replying to: DAM message of 1983 Jan 14, 18:17-EST
I am not very familiar with the present courses on philosophy, so perhaps
I was mistaken in supposing that "correspondence theory of truth"
has a generally accepted informal meaning.  However, I now recall that
Tarski explains it by saying that "Snow is white" is true provided
snow is white.  Thus it is assuming that there is a real world with
certain properties and sentences are true provided the propositions
to which they refer hold in the world.  Of course, this is circular,
so a theory of true propositions has to be like, for example, a theory
of electrons.  Naturally, this doesn't assume any particular set of
symbols.  This now leads me to suppose that perhaps you are not as
familiar as you think with Tarski's ideas about truth, since his work
included in the collection "Logic, semantics and metamathematics"
contains informal philosophy of truth as well as notions applicable
to first order theories.  

	In my view, a theory of truth need not begin with a definition
of truth.  As in science generally, other parts of the theory are often
more stable than the definitions of the basic concepts.  This is even
true of mathematical definitions such as those of natural number.  A
theory of truth must live with the necessity of treating the truth
of sentences that aren't defined in terms of the basic physics of the
world, and this will be complicated.  Nevertheless, besides particular
variants of the correspondence theory of truth, there is a stable
general idea that has many adherents besides the afore-mentioned Godel.


∂14-Jan-83  1931	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Scientific-Engineering Community Metaphor compatible with Society of the Mind?
Date: Friday, 14 January 1983, 22:11-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
To: GAVAN at MIT-MC
Cc: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>, AGRE at MIT-OZ at MIT-MC,
    batali at MIT-OZ at MIT-MC, philosophy-of-science at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 02:15-EST from GAVAN at MIT-MC

    Date: Thursday, 13 January 1983  02:15-EST
    From: GAVAN @ MIT-MC
    To:   Carl Hewitt <Hewitt @ MIT-OZ>
    Cc:   AGRE @ MIT-OZ, batali @ MIT-OZ, philosophy-of-science @ MIT-OZ
    Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
    In-reply-to: The message of 13 Jan 1983 00:50-EST from Carl Hewitt <Hewitt>

        Date: Thursday, 13 January 1983, 00:50-EST
        From: Carl Hewitt <Hewitt>
        To:   GAVAN
        cc:   Carl Hewitt <Hewitt>, AGRE, batali, philosophy-of-science, Hewitt
        Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

            Date: Wednesday, 12 January 1983  03:02-EST
            From: GAVAN at MIT-MC
            Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

            ...
            But it seems to me that the situation is pretty anarchistic: when you have all
            these competing paradigms trying to explain the same phenomena without
            intercommunicating, you've got anarchy.

        The various research programmes of Cognitive Science (behaviorism, complex
        information processing, etc.) do communicate with each other and in LARGER
        ARENAS as well.  You seem to think the situation is "anarchistic"
        because there is not more communication going on.  What exactly is this
        extra communication that is missing?

    You must remember that I'm not arguing in favor of Feyerabend's
    position.  Why do you continually try to get me to defend it?  I'm
    just trying to give you an example of what he might be talking about.
    If you really want to know what his position is, you should read the
    text.  Anyway, you might be able to clear something up for me.  Where
    is the communicative effort required to effect a synthesis between
    behaviorism, cognitive science, and whatever other paradigms there
    might be in psychology?

I would contend that effecting such a synthesis is extremely difficult,
will take a long time, and will therefore require enormous communicative
effort.

                    How often do cognitivists and behaviorists have joint conferences?
                    How many joint journals do they have.  Who sponsors both enterprises?

                Why should they have joint conferences or journals?  What good do you
                think it would do?  Do you think that it is a workable proposal?

            It's certainly not a workable proposal, which is my point.  If they
            have the same problem domain and they don't intercommunicate, then the
            overall state of science in that problem domain is certainly chaotic.

        Exactly what is the lack of communication that makes it "chaotic"?  Who
        in Cognitive Science should be talking to whom?

    You misunderstand me.  The lack of communication is in psychology in
    general, not in cognitive science in particular.  There's a great
    amount of normal science going on within both behaviorism and
    cognitive science, yet the cross-fertilization between the two is
    minimal.  Some of the more dogmatic members of both camps probably see
    nothing wrong with this, but, as I implied in a recent response to
    Batali on this list, the two approaches are by no means mutually
    exclusive.  But where is the cross-fertilization?

There is cross-fertilization in popularizations such as "Psychology
Today" as well as the technical journals.  All of the camps compete
in common arenas for funding and new recruits.

    The problem domain of both paradigms is, it seems to me,
    explaining human nature (or some
    such), yet there's little or no effort to discuss cognitive hypotheses
    and results within the behavioral school and behavioral hypotheses and
    results within the cognitive school.  Don't forget Freudians and the
    Gestaltists.  Is this not anarchy?

Looks like competing scientific programmes to me.

                Its not clear to me that agents in the Society of the Mind communicate
                using messages in any way which is analogous to communication in
                scientific-engineering communities. Do you see any direct similarities?

            It's possible that, when you refer to the communications of agents in
            the Society of the Mind, you have in mind somebody's explication of
            the metaphor with which I'm not familiar.

                        I have in mind the principles by which scientific communities 
                        ACTUALLY work.  Determining the principles by which
                        scientific communities work is itself a scientific question which
                        is addressed by a scientific community.

                    The problem is that there's no agreement on how scientific communities
                    actually work.  Popper, Kuhn, Lakatos, and Feyerabend all draw on
                    empirical, historical evidence to support their incommensurable
                    theories.

                Why do your think they are incommensurable?  They seem to rationally
                discuss issues and argue with each other a lot.

            Kuhn's *Structure of Scientific Revolutions* and Feyerabend's *Against
            Method* are DIAMETRICALLY opposed to Popper's *Logic of Scientific
            Discovery*. The public arguments are a cover for private wars.  I've
            also heard stories (from reliable sources) about nasty mud-slinging
            between Popper and Lakatos at the London School of Economics (before
            the latter's death).

        Backbiting, personal animosity, attempts at cheating, etc. have always
        been a part of the scientific process.  Science/engineering communities have
        developed effective methods for dealing with these phenomena so that the
        communities function effectively in spite of the problems they cause.

    I agree, but in what sense are "backbiting, personal animosity,
    attempts at cheating, etc.", rational?  Will your agents be capable of
    these sorts of performances?

I am afraid that Sprites will be capable of attempting to cheat and
even of animosity toward competing Sprites.  In a distributed
implementation with competitors implementing sprites, there is
nontrivial temptation to cheat if it advances the interests of one of
the parties. Thus I am very interested in mechanisms which scientific
communities have evolved to deal with these performances.

                    Anyway, if you want to use "the principles by which scientific
                    communities ACTUALLY work" you'll have to choose somebody's set of
                    principles.

                Obviously we will have to identify some principles like Commutativity and
                Sponsorship.  It's not clear that we have to restrict ourselves to one
                source of ideas for principles.

            Hopefully, you'll select the right ones.

        The ones we select will be subject to and grow out of a process of scientific
        debate, scrutiny, and reformulation--like the one we are engaged in
        RIGHT NOW on this mailing list.  Perhaps we differ in that I have faith
        in this process whereas you do not.

    No.  I don't think I lack faith in this process.  FEYERABEND DOES, BUT
    I'M NOT HE!  If I did lack faith in this process, why would I bother
    discussing it with you?  This is the substance of Hilary Putnam's
    critique of Feyerabend in *Reason, Truth, and History* -- if
    Feyerabend truly believes his anarchist thesis then he wouldn't bother
    defending it.  Anarchism is thus self-refuting.  This is why I've said
    that Feyerabend may actually be engaged in a massive tongue-in-cheek,
    neo-Popperian critique of Lakatos.

From what I have been able to understand so far, Feyerabend's entire
argument consists of making up curses like "chaotic",
"anarchistic", etc. to descibe the activities of scientific communities.

    I think the process of "coming-to-consensus" is precisely what we need
    to talk about.  Can we come to some sort of consensus about how we
    come to consensus?  Or should we first come to a consensus on whether
    there really is something fundamentally better about the way that
    scientists do it?

Understanding how scientific communities "come to consensus" would be of
great value to us in implementing our system.  I would greatly welcome
suggestions and references to good work.

∂14-Jan-83  1943	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Statement, Truth, and Entailment,
Date: Friday, 14 January 1983, 22:41-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Statement, Truth, and Entailment,
To: DAM at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 12:36-EST from DAM at MIT-MC

    Date: Thursday, 13 January 1983  12:36-EST
    From: DAM @ MIT-MC
    To:   phil-sci @ MIT-OZ
    Subject: Statement, Truth, and Entailment,

    ...

            Another argument for these notions is the intuitive truth of
    mathematics.  Pure mathematics has nothing to do with the real world
    and yet there seems to be objective mathematical truth.  Is there any
    explanation for this other than to assume an innate notion of
    mathematical or "definitional" truth?  This argument is more
    convincing if one takes mathematics to be prior to any formulation of
    it.

I don't see any advantage to conceiving mathematics as existing prior to any
formulation of it.  Indeed I would rather conceive of it as being a
growing and evolving community. 

    It is the intuitive notion of a precise argument which gave rise
    to set theory not the other way around.  Even today set theory (and
    first order inference) must be taken as only an approximation of true
    mathematical precision which is an undefined human phenomenon.
            Finally I would like to address Carl's "message passing
    semantics".  Consider "taking the meaning of the message to be the
    effect it has on the subsequent behaviour of the system".  This is a
    very good example of what I call computational reductionism.  Notice
    the similarity to stimulus-response definitions of meaning.  Carl goes
    so far as to argue AGAINST defining truth and meaning in a way which
    is independent of the computation performed by the system.  It seems
    to me that Fourier transforms (and FFT procedures) are best understood
    in terms of REAL numbers.  Try defining the notion of a real number
    in a purely computational way. ...

Attempts at formalizing the notion of real number circulated within the
community of mathematicians for a long time before consensus was
reached for a time.  Then the debate was renewed by the constructivists
who have created a new community that uses a different notion from the
one which is taught in the introductory calculus courses.  So I would
like to understand the meaning of these formalizations not in terms of
whether they are "true" or "false" but rather the use to which they are
put by the mathematical community.
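
[Editorial aside, prompted by DAM's challenge above to define a real number
"in a purely computational way": one constructivist-flavored answer treats a
real as a procedure producing arbitrarily good rational approximations.  The
sketch below only illustrates that idea; it is not a formalization anyone in
this discussion endorsed.]

# a real number as an approximation procedure: given n, return a rational
# within 2**-n of the number (here the square root of 2, computed by
# Newton's method over exact rationals)
from fractions import Fraction

def sqrt2(n):
    x = Fraction(3, 2)
    # since x > 1, |x - sqrt(2)| <= |x*x - 2| / 2, so this tolerance suffices
    while abs(x * x - 2) >= Fraction(1, 2 ** (n + 1)):
        x = (x + 2 / x) / 2
    return x

print(float(sqrt2(30)))   # 1.4142135623...

# deciding whether two such procedures denote the same real is not possible
# in general, which is part of why the computational notion stays contentious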




∂14-Jan-83  2009	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
Date: Friday, 14 January 1983, 23:09-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: The smallest description of the past is the best theory for the future?
To: GAVAN at MIT-MC
Cc: MINSKY at MIT-OZ at MIT-MC, BAK at MIT-OZ at MIT-MC,
    JCMa at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 13 Jan 83 21:08-EST from GAVAN at MIT-MC

    Date: Thursday, 13 January 1983  21:08-EST
    From: GAVAN @ MIT-MC
    To:   MINSKY @ MIT-OZ
    Cc:   BAK @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-OZ
    Subject: Solomonoff et alia
    In-reply-to: The message of 13 Jan 1983  12:21-EST from MINSKY

        Date: Thursday, 13 January 1983  12:21-EST
        From: MINSKY
        To:   BAK, MINSKY
        cc:   JCMa, phil-sci
        Re:   Popper, again.

        . . .

        Philosophically, I consider the idea very clear and sensible -
        exactly because it does seem to answer all the objections to "naive"
        simplicity-criteria theories of inference.  In
        particular, it does deal with (i) the idea of all possible
        hypotheses and (ii) the complaint that simplicity is relative to
        what one assumes available at the start.

    This is exciting, if "true," since it would mean that the simplicity
    criterion can be resurrected, thereby making unnecessary Lakatos'
    reformulation (motivated by Kuhn's critique) of sophisticated
    methodological falsificationism, which Feyerabend shows does not save
    us from irrationalism.

I don't understand why the Solomonoff et al. theory doesn't have the
Hill Climbing Bug which is that it can be made to weakly track the past
but doesn't predict the future.  The description of General Relativity
(involving tensors, Riemannian Geometry, etc.) is much larger than
Newtonian Mechanics.  However many of the leading lights of physics
jumped to embrace it even BEFORE the results of a few experiments showed
problems with Newtonian Mechanics.  After the first few experiments, the
description of Newtonian Mechanics together with the known exceptions was
still SMALLER than General Relativity yet it had very FEW adherents.
Why?

∂14-Jan-83  2241	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Consensus Theory of Truth   
Date: Saturday, 15 January 1983, 01:39-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Consensus Theory of Truth
To: DAM at MIT-MC
Cc: JMC at SU-AI, phil-sci at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 14 Jan 83 12:47-EST from DAM at MIT-MC

    Mail-From: DAM created at 14-Jan-83 12:47:34
    Date: Friday, 14 January 1983  12:47-EST
    Sender: DAM @ MIT-OZ
    From: DAM @ MIT-MC
    To:   JMC @ SU-AI
    cc:   phil-sci @ MIT-OZ
    Subject: Consensus Theory of Truth

            Date: Friday, 14 January 1983  02:03-EST
            From: John McCarthy <JMC at SU-AI>
            To:   gavan, phil-sci%mit-oz at MIT-MC

            ...
                    In my opinion, a person who makes a clear distinction between
            truth and what is "consensually validated" will have a better chance of
            advancing philosophy and/or artificial intelligence than someone who
            muddles them.  He might, for example, be able to show that the notions
            coincide in some cases and differ in others.

    It seems to me that the right distinction is between
    MATHEMATICAL and EMPIRICAL truth.  Mathematical truth consists of
    DEFINITIONAL TAUTOLOGIES.  If I tell you that P implies Q and that P,
    and if I DEFINE the meaning of implies, you must conclude Q.  I
    disagree with Marvin that mathematical truth is just a convention.
    The "conventions" in mathematics are the agreed upon definitions, but
    the truths which follow from those definitions are not agreed upon;
    they are objectively true.

Whether or not a sentence follows from certain definitions can be quite
problematical.  What is your position on the truth of Fermat's Last
Theorem?  In "Proofs and Refutations:  The Logic of Mathematical
Discovery", Lakatos gives a good historical treatment of how difficult
it can be to decide whether or not the Euler formula for polyhedra follows from
the definitions for polyhedra.  "Social Processes and Proofs of Theorems" contains
more modern examples.   

    Furthermore I take tautological truth to
    be independent of any formalization of mathematics.  I actually assume
    that there is an innate inference mechanism for determining
    tautological truths, and such truths are "objective" in the sense that
    we all share the same innate mechanism.

What do you think of the various schools of mathematics such as
Intuitionism and Constructivism?  Perhaps some of us have different
innate mechanisms from others.

    I realize that this is an extreme position to take but I think it
    is a defensible extreme.
            Tarskian truth valued semantics is a very good conceptual tool
    in understanding mathematical truth.  Note that the Tarskian notion of
    truth is a DEFINED notion while I take human tautological truth to be
    a real world phenomenon.  It seems that the real world phenomenon of
    mathematical truth can best be understood today in the conceptual
    framework of the defined notion of Tarskian truth.

∂15-Jan-83  0642	GAVAN @ MIT-MC 	Scientific-Engineering Community Metaphor compatible with Society of the Mind? 
Date: Saturday, 15 January 1983  09:39-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   Carl Hewitt <Hewitt @ MIT-OZ>
Cc:   AGRE @ MIT-OZ, batali @ MIT-OZ, philosophy-of-science @ MIT-OZ
Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
In-reply-to: The message of 14 Jan 1983 22:11-EST from Carl Hewitt <Hewitt>

    Date: Friday, 14 January 1983, 22:11-EST
    From: Carl Hewitt <Hewitt>
    To:   GAVAN
    cc:   Carl Hewitt <Hewitt>, AGRE, batali, philosophy-of-science, Hewitt
    Re:   Scientific-Engineering Community Metaphor compatible with Society of the Mind?

        Date: Thursday, 13 January 1983  02:15-EST
        From: GAVAN @ MIT-MC
        To:   Carl Hewitt <Hewitt @ MIT-OZ>
        Cc:   AGRE @ MIT-OZ, batali @ MIT-OZ, philosophy-of-science @ MIT-OZ
        Subject: Scientific-Engineering Community Metaphor compatible with Society of the Mind?
        In-reply-to: The message of 13 Jan 1983 00:50-EST from Carl Hewitt <Hewitt>

        You must remember that I'm not arguing in favor of Feyerabend's
        position.  Why do you continually try to get me to defend it?  I'm
        just trying to give you an example of what he might be talking about.
        If you really want to know what his position is, you should read the
        text.  Anyway, you might be able to clear something up for me.  Where
        is the communicative effort required to effect a synthesis between
        behaviorism, cognitive science, and whatever other paradigms there
        might be in psychology?

    I would contend that effecting such a synthesis is extremely difficult,
    will take a long time, and will therefore require enormous communicative
    effort.

Well, I think that there are some people who are working at such a
synthesis.  But the requirements of normal science in all paradigms of
psychology tend to create barriers to entry into this literature.
Sure it requires an enormous communicative effort.  But if anyone is
truly interested in such a synthesis, they can use the communicative
resources they're currently wasting on the activity of normal science.

        You misunderstand me.  The lack of communication is in psychology in
        general, not in cognitive science in particular.  There's a great
        amount of normal science going on within both behaviorism and
        cognitive science, yet the cross-fertilization between the two is
        minimal.  Some of the more dogmatic members of both camps probably see
        nothing wrong with this, but, as I implied in a recent response to
        Batali on this list, the two approaches are by no means mutually
        exclusive.  But where is the cross-fertilization?

    There is cross-fertilization in popularizations such as "Psychology
    Today" as well as the technical journals.  All of the camps compete
    in common arenas for funding and new recruits.

This move takes us out of the scientific community and into the general
community.  If this is the level at which syntheses can be effected,
then the scientific-community metaphor is probably too limited.


        The problem domain of both paradigms is, it seems to me,
        explaining human nature (or some
        such), yet there's little or no effort to discuss cognitive hypotheses
        and results within the behavioral school and behavioral hypotheses and
        results within the cognitive school.  Don't forget Freudians and the
        Gestaltists.  Is this not anarchy?

    Looks like competing scientific programmes to me.

What are the standards of the competition?  If there are none, then I suppose
Feyerabend would be justified in characterizing science as anarchistic.

        I agree, but in what sense are "backbiting, personal animosity,
        attempts at cheating, etc.", rational?  Will your agents be capable of
        these sorts of performances?

    I am afraid that Sprites will be capable of attempting to cheat and
    even of animosity toward competing Sprites.  In a distributed
    implementation with competitors implementing sprites, there is
    nontrivial temptation to cheat if it advances the interests of one of
    the parties. Thus I am very interested in mechanisms which scientific
    communities have evolved to deal with these performances.

To DEAL with them?  What do you mean?  If you mean to model these types
of performances, there's always game theory.

        No.  I don't think I lack faith in this process.  FEYERABEND DOES, BUT
        I'M NOT HE!  If I did lack faith in this process, why would I bother
        discussing it with you?  This is the substance of Hilary Putnam's
        critique of Feyerabend in *Reason, Truth, and History* -- if
        Feyerabend truly believes his anarchist thesis then he wouldn't bother
        defending it.  Anarchism is thus self-refuting.  This is why I've said
        that Feyerabend may actually be engaged in a massive tongue-in-cheek,
        neo-Popperian critique of Lakatos.

    From what I have been able to understand so far, Feyerabend's entire
    argument consists of making up curses like "chaotic",
    "anarchistic", etc. to descibe the activities of scientific communities.

Read the source material if you want to know what the argument is.  Don't
expect me to be able to reproduce faithfully an argument which I don't even
believe.  I still think Feyerabend may have his tongue in his cheek.  He
may be surreptitiously defending Popper against Lakatos.

        I think the process of "coming-to-consensus" is precisely what we need
        to talk about.  Can we come to some sort of consensus about how we
        come to consensus?  Or should we first come to a consensus on whether
        there really is something fundamentally better about the way that
        scientists do it?

    Understanding how scientific communities "come to consensus" would be of
    great value to us in implementing our system.  I would greatly welcome
    suggestions and references to good work.

If this is true, then I strongly recommend Nicholas Rescher's
*Dialectics: A Controversy-Oriented Approach to the Theory of
Knowledge*.  I'll loan you my copy if you promise not to lose it.

∂15-Jan-83  1329	DAM @ MIT-MC 	Consensus theory of truth      
Date: Saturday, 15 January 1983  16:19-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci%mit-oz @ MIT-MC
Subject: Consensus theory of truth   
In-reply-to: The message of 14 Jan 1983  15:49-EST from John McCarthy <JMC at SU-AI>


	I must tell you that I consider myself to be much more of a
mathematician than a philosopher and therefore I am more familiar
with the mathematical results of Godel, Tarski, Kripke, etc. than
with their philosophical positions.  The "correspondence theory"
of truth as you describe it is interesting and I certainly don't
want to force you into definitions too early (there are lots of
real world things that I know about but can't define, the human
notion of truth is one such thing).  However I think there are
enormous complexities in understanding how we arrive at empirical
truth.  I do not believe that there is some a-priori language of
perception (or symbols directly interpreted by the world).  Furthermore
it seems to me that any understanding of how we OBTAIN truth must
explain how truth is derived from sense data.  It seems to me that
the most fruitful approach is to assume that the language we use
in understanding and perceiving the world develops along with our
understanding.  Furthermore I have found no alternative
to some version of Occum's razor for describing how anyone OBTAINS
truth.
	In summary I don't believe that any simple correspondence
theory can explain how humans obtain truth.  Of course to really
argue such a point we would need a more constrained notion of
the "corrospondence theory" of truth, and perhaps a more precise
account of an "Occum's razor" theory.  I don't like the Solomonoff
version for the simple reason that it ignores "sentences", "truth",
and "entailment".

	David Mc

∂15-Jan-83  1355	DAM @ MIT-MC 	Solomonoff 
Date: Saturday, 15 January 1983  16:37-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Solomonoff

	Date: Friday, 14 January 1983, 23:09-EST
	From: Carl Hewitt <Hewitt>

	...
	I don't understand why the Solomonoff et al. theory doesn't have the
	Hill Climbing Bug which is that it can be made to weakly track the past
	but doesn't predict the future.  The description of General Relativity
	(involving tensors, Riemannian Geometry, etc.) is much larger than
	Newtonian Mechanics.  However many of the leading lights of physics
	jumped to embrace it even BEFORE the results of a few experiments
	showed problems with Newtonian Mechanics.  After the first few
	experiments, the description of Newtonian Mechanics together with
	the known exceptions was still SMALLER than General Relativity
	yet it had very FEW adherents. Why?


	I do not consider Solomonoff complexity theory to be a good
interpretation of Occum's razor precisely because it ignores the
notions of statement, truth, and entailment.  However if one is
willing to live without these notions then Solomonoff complexity
theory seems like the right thing.  There is no hill climbing bug
because the theory assumes the ability to search the ENTIRE space
of possible explanations.  As Marvin has mentioned this makes it a
completely impractical theory.
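
[Editorial toy, not DAM's: one way to see both the rule and why searching the
whole space is impractical is to restrict "theories" to finite patterns
repeated forever, and to look for the shortest pattern that reproduces an
observed 0/1 sequence.]

# prefer the shortest "theory" (here: a repeating pattern) that reproduces
# the observations; genuine Solomonoff-style induction ranges over all
# programs, which is what makes it impractical (and undecidable)
def shortest_periodic_theory(observed):
    for n in range(1, len(observed) + 1):            # shortest patterns first
        pattern = observed[:n]
        prediction = (pattern * len(observed))[:len(observed)]
        if prediction == observed:
            return pattern
    return observed

print(shortest_periodic_theory("011011011011"))      # prints '011'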
	It is not at all clear to me that Newtonian mechanics is simpler
than general relativity.  In judging simplicity one must compare the
length of the PREMISES of the theories not the length of the arguments
that follow from these premises.  I think the major premise of general
relativity is that gravitation is locally indistinguishable from
acceleration.  This is pretty short.

	David Mc

∂15-Jan-83  1421	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Saturday, 15 January 1983  17:20-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth


	Date: Saturday, 15 January 1983, 01:39-EST
	From: Carl Hewitt <Hewitt>

	Whether or not a sentence follows from certain definitions can be quite
	problematical.  What is your position on the truth of Fermat's Last
	Theorem?  In "Proofs and Refutations:  The Logic of Mathematical
	Discovery", Lakatos gives a good historical treatment of how difficult
	it can be to decide whether or not the Euler formula for polyhedra
	follows from the definitions for polyhedra.  "Social Processes and
	Proofs of Theorems" contains more modern examples.   

	I have read "Proofs and Refutations" by Lakatos and find the
arguments he presents quite unconvincing.  The "definitions" which are
initially developed would not be considered as such by any modern
mathematician.  The basic phenomenon which he brings to light is the
enormous tendency to think that one is being precise when one is
actually talking about REAL WORLD notions (such as physical space).  A
real world notion such as physical space is something we are all
intimately familiar with and that we all "understand" to some extent.
The book "Proofs and Refutations" can be interpreted as simply
pointing out the dangers of assuming that a thing is totally DEFINED
just because we are experientially familiar with that thing. The
notion of our actual physical space can never be totally defined
simply because any such definition might eventually prove to be wrong.
I am not familiar with "Social Processes and Proofs of Theorems" but
one must remember to separate simple human mistakes, like programming
bugs, from indefiniteness in mathematics.

	What do you think of the various schools of mathematics such as
	Intuitionism and Constructivism?  Perhaps some of us have different
	innate mechanisms from others.

	There are indeed various schools of mathematics (intuitionism
and constructivism refer to the same school I think, while finitism
is another, both of these are in addition to people who believe in
different versions of "normal" set theory).  There is a simple account
for this within the framework of an objective mathematical truth.  Any
one mathematician is capable of understanding the premises adopted by
any one of these schools. Constructivism for example can be easily
understood by "normal" mathematicians by considering the statements
constructivists make to be about a Kripke structure (collection of
possible worlds) of a certain sort.  Thus I can talk about what is
true under the assumptions of a constructivist, a finitist, or
whatever.  Furthermore ALL MATHEMATICIANS AGREE ABOUT THE TRUTHS OF
THE VARIOUS APPROACHES.  They simply disagree about which approach is
"correct".  That different truths follow from different definitions
and assumptions is not surprising.  It is surprising (I think) that
all mathematicians agree on the truths of the various schools.
	I interpret the existence of various schools of mathematics as
an indication that we do not have any precise account of mathematical
truth, i.e. none of the schools really captures the notion of human
mathematical truth.  This does not mean that there is no such
objective notion.  I think I can make precise arguments and yet I do
not consider myself committed to any of the existing schools of
mathematics (though I do understand some of these schools as precise
theories of mathematical truth).

	David Mc

∂15-Jan-83  1433	John McCarthy <JMC@SU-AI> 	correspondence theory of truth   
Date: 15 Jan 83 1426
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory of truth   
To:   DAM at MIT-MC, phil-sci%mit-oz at MIT-MC  

Subject: correspondence theory of truth
Replying to: DAM message of 1983 January 15, 16:17

 	David Mc, I think I agree with what I regard as your main points,
but let me reformulate them.  A correspondence theory of what truth is,
whether formal or informal, doesn't say how truth is to be obtained.
Indeed what we obtain are beliefs, some of which are true.  The
correspondence theory of truth deliberately
does not satisfy the positivist or pragmatic criterion
that truth should be defined in terms of how it is obtained.
Thus my meta-epistemology judges truth of the "scientist"'s
statements by their correspondence with the facts of the "world"
part of our meta-epistemological model.  It would be a theorem
of meta-epistemology that truth cannot be obtained without
experiment in certain epistemological systems.

	The biggest part of epistemology indeed concerns how
truth is obtained from sense data.  I also agree that Occam's razor
is essentially involved, and my proposed circumscription method
of non-monotonic reasoning can be used for formalizing Occam's
razor arguments.  (There are two common spellings: Occam and Ockham.
This discussion is the first place I've seen Occum, and I tentatively
regard it as a spelling error.)

	I share your doubts about the utility
of the Solomonoff, Kolmogorov, Chaitin approach, the neatest version
of which seems to be Chaitin's.  Asymptotically, the approach is correct,
in that the length of the shortest program for describing the facts
is independent of the initial programming language apart from an
additive constant.  However, I believe that all common sense facts
and all scientific theories produced up to the present are too short
for the asymptotic virtues of the Solomonoff approach to dominate.
I'll call it the Solomonoff approach, although I haven't read his
papers, since Minsky correctly complains that his presentation precedes
those of Kolmogorov and Chaitin, and was regrettably neglected.

Example: Suppose we have a sequence of 0's and 1's produced by a family
of rules which I will shortly give, and we wish to describe them compactly.
Solomonoff correctly points out (at least Chaitin did) that we can
start with whatever programming language  L  we like and use the length
of the program as a measure of the complexity of a particular rule of
the family.  The asymptotic behavior won't depend on the language  L,  because
if another programming language  L'  would give shorter programs, we
have only to define an interpreter for  L'  in  L  and then use  L'  for
our programs.  Since the interpreter is of fixed length, starting
with the wrong language only adds a constant to the length of the
descriptions of the family of rules.
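
[Editorial gloss of the invariance argument above: writing K_L(x) for the
length of the shortest program in language L that produces the data x, and
c(L,L') for the length of an interpreter for L' written in L, the claim is
that

	K_L(x) <= K_L'(x) + c(L,L')

so the complexities assigned by any two starting languages differ by at most
a constant that does not depend on x.]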

	Suppose the sequences to be formed in the following way.  There
is a rectangular area in which a particle moves.  The particle moves
with constant velocity but when it hits a wall or an obstruction it
bounces off with the angle of reflection equal to the angle of
incidence.  The area in which the particle moves contains rectangular
obstructions and rectangular roofs that the particle can go under
without its motion being affected.  However, when the particle is
under a roof, it is invisible.  The observable sequence of 0's and 1's is formed
by sampling every second and outputting  1  if the particle is visible
and  0  if it is invisible.  The different rules for determining
sequences are determined by different initial positions and velocities
and different collections of obstacles and roofs.  We suppose that
the rectangle, the particle, and the obstacles and roofs are not
directly observable; all that the scientist can see directly is the
sequence of 0's and 1's.
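
[Editorial sketch, not part of JMC's message: a direct simulation of the
obstacle-and-roof world just described.  The enclosing rectangle, the
obstruction, the roof, the initial position and velocity, and the time step
are all made-up illustrative values; only the 0/1 visibility sequence is
treated as observable.]

def inside(rect, x, y):
    """True if (x, y) lies inside the axis-aligned rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def visibility_sequence(samples, pos, vel,
                        walls=(0.0, 0.0, 10.0, 10.0),
                        obstacles=((4.0, 4.0, 6.0, 6.0),),
                        roofs=((7.0, 0.0, 10.0, 3.0),),
                        dt=0.05, steps_per_sample=20):
    x, y = pos
    vx, vy = vel
    seq = []
    for step in range(samples * steps_per_sample):
        px, py = x, y                      # remember where we came from
        x, y = x + vx * dt, y + vy * dt
        # reflect off the outer walls: angle of incidence = angle of reflection
        x0, y0, x1, y1 = walls
        if x < x0 or x > x1:
            vx, x = -vx, px
        if y < y0 or y > y1:
            vy, y = -vy, py
        # reflect off any obstruction just entered, by reversing the velocity
        # component normal to the face that was crossed
        for ob in obstacles:
            if inside(ob, x, y):
                ox0, oy0, ox1, oy1 = ob
                if not (ox0 <= px <= ox1):
                    vx = -vx
                if not (oy0 <= py <= oy1):
                    vy = -vy
                x, y = px, py              # step back out of the obstruction
        # the particle is sampled once per "second"; roofs hide it but do not
        # affect its motion
        if step % steps_per_sample == 0:
            seq.append(0 if any(inside(r, x, y) for r in roofs) else 1)
    return seq

print("".join(map(str, visibility_sequence(60, pos=(1.0, 1.0), vel=(3.1, 2.3)))))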

	I use this example for a variety of purposes but mainly to
argue that the heuristic or AI essence of science cannot be
summarized as the extrapolation
of the sequence of sense data.  For the present purpose the point is
this.  The initial programming language may be one well suited to
describing rules giving sequences of 0's and 1's.  However, it
probably won't be convenient to give these sequences directly except
for very simple collections of obstacles and roofs.  Instead it
will be necessary to build an interpreter for "obstacle and roof language"
and use that to go back to sequences.  As scientific domains go,
the obstacle-and-roof world is very simple, but someone confronted
with the sequences of 0's and 1's would probably have to invent
obstacles and roofs in order to explain them.  In fact, I'll almost
bet that if an important physics phenomenon produced such sequences,
and there were no a priori reasons to suggest the obstacle-and-roof
model,  the problem might persist for years, and the ultimate
inventor of the obstacle-and-roof theory would rate a Nobel prize.
The fact that the Solomonoff model gives theories of the
obstacle-and-roof world that are asymptotically optimal as the
number of obstacles and roofs goes to infinity won't be mentioned
in the Nobel citation, although in the acceptance speech, the
inventor will thank the programmers who generated the sequences
he compared with observation.

	This is already too long, and I'll discuss the
use of circumscription to formalize Occam's razor later.

∂15-Jan-83  1436	DAM @ MIT-MC 	Statement, Truth, and Entailment    
Date: Saturday, 15 January 1983  17:32-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Statement, Truth, and Entailment

	Date: Friday, 14 January 1983, 22:41-EST
	From: Carl Hewitt <Hewitt>

	I don't see any advantage to conceiving mathematics as existing
	prior to any formulation of it.  Indeed I would rather conceive
	of it as being a growing and evolving community.

The set of definitions under consideration clearly evolves and grows.

	Attempts at formalizing the notion of real number circulated within the
	community of mathematicians for a long time before consensus was
	reached for a time. ...  So I would like to understand the meaning
	of these formalizations not in terms of whether they are "true" or
	"false" but rather the use to which they are put by the mathematical
	community.

Definitions are never true or false.  Statements about defined notions
are a different matter.

	Actual physical space can never be defined because any such
definition might be proven wrong by physics.  The situation is similar
for the notion of an actual physical length.  The debate in
mathematics concerns which definitions are most useful, or most
closely approximate our notion of length.

	David Mc

∂15-Jan-83  1522	MINSKY @ MIT-MC 	Solomonoff and RElativity, etc.  
Date: Saturday, 15 January 1983  18:09-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ, MINSKY @ MIT-OZ
Cc:   Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff and RElativity, etc.
In-reply-to: The message of 15 Jan 1983  16:37-EST from DAM


RE: remark of DAM re Solomonoff.

	There is no hill climbing bug because the theory assumes the
	ability to search the ENTIRE space of possible explanations.
	As Marvin has mentioned this makes it a completely impractical
	theory.

Well, actually this is like anything else.  I didn't remark that the
theory is completely impractical.  It is not an effective procedure,
on the surface, because (i) the space is large and (ii) the halting
problem is recursively unsolvable.  However, one can use the idea to
approximate, at the usual risks.  No one would say that Newton's
theory is completely impractical because three body problems are hard.
Indeed, it could turn out that certain questions about solar systems
are recursively unsolvable (i.e., if they turned out equivalent to
solving suitable diophantine equations).

Of course, when one introduces heuristics to apply Solomonoff's
theory (just as when one applies heuristics to Newton's theory) one
may indeed get involved in hill-climbing.

As for Hewitt's question about simplicity of Einstein vs. Newton (and
DAM's reply), this would be, as DAM would probably agree, no problem
if one stops isolating a statement of the theory from the theory.
Solomonoff would consider that both a Newtonian and an Einsteinian
would happily accept all of known and apparently sound mathematics.
So the cost of applying tensor calculus and Minkowski geometry is
zero, in a realistic sense, while the premise of not distinguishing
acceleration from gravitation is apparently a gain in simplicity -
since other than that, both use the same beliefs (or axioms) about
ordinary mathematical reasoning.

∂15-Jan-83  1527	MINSKY @ MIT-MC 	correspondence theory of truth   
Date: Saturday, 15 January 1983  18:25-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   DAM @ MIT-OZ, phil-sci%mit-oz @ MIT-MC
Subject: correspondence theory of truth   
In-reply-to: The message of 15 Jan 1983  14:26-EST from John McCarthy <JMC at SU-AI>


	In fact, I'll almost bet that if an important physics
	phenomenon produced such sequences, and there were no a priori
	reasons to suggest the obstacle-and-roof model, the problem
	might persist for years, and the ultimate inventor of the
	obstacle-and-roof theory would rate a Nobel prize.

I agree.  In fact, when I mentioned that Levin was working on
approximations to Solomonoff's theory, that might make it
heuristically computable, I forgot to say that I do not believe this
can succeed in general.  In fact, I think JMC and I probably agree
that this amounts to finding heuristics that would solve AI problems in
general, and it seems unlikely that there are clean schemes that find
"simplest hypotheses" reliably with modest calculations.  (I haven't
decided whether to consider the computations done by Nobelists to be
within the class of "modest" they probably are, in the sense sense
that computers may do such calculations within the next millenium.)

By the way, many of the germs of the Solomonoff-Kolmogoroff-Chaitin
ideas are in the obscure paper by McCarthy in "Automata Studies",
since in that paper he introduces the idea of making shortest
descriptions by compositions, etc., of previously shortest
descriptions.  As I recall, he proposes to introduce a novel code for
the Turing machine to test only after exhausting compositions of prior
ones.  (But I forget whether there is a weighting on whether the other
short descriptions have been "useful".  Sadly, I can't find a copy of
Automata Studies.)

∂15-Jan-83  1647	DAM @ MIT-MC 	Solomonoff and RElativity, etc.
Date: Saturday, 15 January 1983  19:46-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff and RElativity, etc.
In-reply-to: The message of 15 Jan 1983  18:09-EST from MINSKY


	Perhaps the right way to view Solomonoff is as a method
for choosing between competing theories.  While this eliminates the
issue of searching the space of theories, one is still left with
the halting problem (does a theory in fact predict x?, if my computations
would only terminate I would tell you).  However I agree with Marvin
(at least in the case of choosing between theories) that Solomonoff's
work could be used as a practical guide.

	As for the Newton-Einstein stuff I again agree with Marvin.
I forgot to note earlier that Einstein's theory eliminates the need
to ASSUME that gravitational mass equals inertial mass, which is a
premise of Newtonian physics.

	My difference with Marvin "only" concerns the importance
of the notions of statement and truth.

	David Mc

∂15-Jan-83  2138	John McCarthy <JMC@SU-AI> 	correspondence model of truth    
Date: 15 Jan 83 1723
From: John McCarthy <JMC@SU-AI>
Subject: correspondence model of truth    
To:   dam at MIT-MC, phil-sci%mit-oz at MIT-MC  

Subject: correspondence model of truth
In reply to: DAM of 1983-jan-15 19:14
	I wonder if you would make an effort to find the reference to
mu-calculus.  As you have described it, it is more general than the
circumscription of my paper in one respect - allowing more general
Q's - and less general in another.  I do not restrict the way  P
appears in  Q.  On the one hand, I cannot guarantee that a model
exists; on the other hand, circumscription allows non-unique minimal models,
as when I circumscribe the predicate  isblock  in the sentence
"isblock(a) or isblock(b)".  Such models are important in some of
the proposed AI applications.  The new version of circumscription
allows minimizing an arbitrary formula, and therefore may equal
mu-calculus in this respect.  I'm still writing the paper but let
me offer the following formula (modifying the notation so as not
to make presumptions about the reader's character set).

Q'(P) iff Q(P) and (P')(Q(P') and (x)(E(P',x) implies E(P,x))
	implies (x)(E(P,x) implies E(P',x)))

Here  Q  is a defined second order predicate and E is a formula
in a predicate and a variable.  Q'(P)  then requires that  P
minimize  E(P,x)  subject to the condition  Q.  The ordering in
which the minimization occurs is then given by

	R1 lesseq R2 iff (x)(R1(x) implies R2(x)).

Writing the two formulas with more characters (for those with reasonable
displays) gives

Q'(P) ≡ Q(P) ∧ (∀P'.Q(P') ∧ ∀x.(E(P',x)⊃E(P,x)) ⊃ ∀x.(E(P,x) ⊃ E(P',x)))

and

λx.R1(x) ≤ λx.R2(x) ≡ ∀x.(R1(x) ⊃ R2(x)).
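
[Editorial worked example, instantiating the schema above rather than quoting
JMC: take Q(P) ≡ P(a) ∨ P(b) (the isblock sentence, writing P for isblock)
and E(P,x) ≡ P(x).  Then Q'(P) says that P satisfies Q and that every P'
satisfying Q with ∀x.(P'(x) ⊃ P(x)) also satisfies ∀x.(P(x) ⊃ P'(x)), i.e. P
is minimal among the predicates satisfying Q.  With a ≠ b the minimal models
are exactly

	P = λx.(x = a)   and   P = λx.(x = b),

the two non-unique minimal models mentioned earlier in the message.]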

	From your hints, I would imagine that there is an important
difference of motivation, since I believe that for AI, the finite
cases are much more important than the use of the formalism to
define concepts like well ordering.

	The new circumscription doesn't handle "ontological development"
by itself, but I am trying to use it to design what I have been
calling "elaboration tolerant" formalisms.  Here is the example
problem I am now working on.  I have written down the problem,
but I haven't yet written down my idea of a solution.  If anyone
else regards this as an interesting problem, i.e. the problem of
generalizing a predicate to take an increased number of arguments
without losing information not refuted, I'd be glad to see what
they come up with.

	Consider at(Stanford, California) in view of the fact that,
although it is unlikely, the trustees could decide to move Stanford
to New Jersey.  In a sufficiently wide context, we might therefore
write  at(Stanford,California,s).  Our objectives are  the following:

1. We want to include  at(Stanford,California)  in a database without
even imagining that it might be movable.

2. We want to be able to generalize to wider contexts.  In such a
generalization, it should be conceivable that Stanford is movable.

3. When such a generalization is made, it is a non-monotonic conclusion
that  at(Stanford,California)  is still the appropriate expression -
unless the movability of Stanford is considered.

4. Merely considering the possible movability of Stanford doesn't
prevent  at(Stanford,California) from being said.  However, we can
also say something like at(Stanford,California,s).

5. When we are forced to  at(Stanford,California,s),  the usual
properties of Stanford go along with it by suitable non-monotonic
reasoning.

6. The reasoning may force the splitting of the concept into
several.  Some refer to the University, which may move, and some
refer to purely geographical features like Lake Lagunita, which
continues immovable.  There is also the post office.

	In the above we have used  s  as a situation but perhaps
also as a context.  Pat Hayes and Bob Moore do things this way,
but I have always been dubious though without convincing objections.
We'll see whether we need distinct concepts.

	Here's a try at solving the problem:

	1. We reify at(Stanford,California)  so the alternate
formulations are now  holds(at(Stanford,California))  and
holds(at(Stanford,California),s).

...
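
A toy sketch in Python of objectives 1 and 3 (a crude illustration only,
not the solution being worked out in this message; the fact and situation
names are made up): the bare fact is stored without a situation argument,
and a query in a wider context concludes the situated version by default,
retracting that conclusion only when contrary information about the
situation is recorded.

facts = {("at", "Stanford", "California")}   # objective 1: the bare, context-free fact
exceptions = set()                           # situations in which the default is blocked

def holds(fact, s):
    # objective 3: lift the context-free fact into situation s by default;
    # the conclusion is non-monotonic and disappears if an exception is added
    return fact in facts and (fact, s) not in exceptions

f = ("at", "Stanford", "California")
print(holds(f, "s1"))            # True: concluded by default lifting
exceptions.add((f, "s1"))        # later: in s1 the trustees have moved Stanford
print(holds(f, "s1"))            # False: the default conclusion is withdrawn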

∂15-Jan-83  2142	John McCarthy <JMC@SU-AI> 	consensus theory of truth and Solomonoff et al. 
Date: 15 Jan 83 1912
From: John McCarthy <JMC@SU-AI>
Subject: consensus theory of truth and Solomonoff et al. 
To:   phil-sci%mit-oz at MIT-MC  

Subject: consensus theory of truth and Solomonoff et al.
In reply to: the discussion
	I think Lakatos's theory, based on the example of Euler's theorem,
is wrong.  Euler's theorem is an extreme case, because it concerns an
area in which mathematicians were content to follow a very intuitive
tradition until very recently.  Euler's theorem was considered to be built
on Euclidean geometry which had a rigorous superstructure and foundations
of sand until Hilbert cleaned it up.  If this were a world in which
DARPA would pay for such things, we could, with a few man years of
work, provide a proof-checking system for an area of geometry that
would admit the statement of Euler's theorem and very much more.
We could also safely offer theorem insurance at very attractive
premiums to mathematicians who would use our system.

The Chaitin version of the theory is very powerful in the sense that
if only we knew the first few thousand decimal places of his big omega,
we could make a program that would decide Fermat's last theorem, the
Riemann hypothesis and many other open problems.  It is also the only
present piece of "strongly Hardian mathematics".  G. H. Hardy once
expressed satisfaction that number theory had no practical applications
- in which he was mistaken.  The recent procedure for finding very
large primes for use in the RSA cipher uses higher order reciprocity
laws, which are quite difficult number theory, though I don't know
whether it uses Hardy's own work.  The theory of big omega is strongly Hardian
mathematics in that it contains a proof of its own lack of applications.
You can't get the first thousand places of big omega, and if you
had it the computations to settle Fermat's last theorem wouldn't
finish before the heat death.  I fear this applies to the whole
Solomonoff-Kolmogorov-Chaitin theory,  though I wouldn't bet money
on it.
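
The mechanism behind "knowing the first digits of big omega would settle
open problems" can be sketched with a toy machine (a hypothetical
illustration in Python; the program table below stands in for the real
universal machine the actual theorem requires).  Omega is the sum of
2^-length over halting programs, so once a dovetailer has seen enough
halting weight to reach a known n-bit lower approximation of omega, no
program of length at most n can still be destined to halt; in particular a
program that searches for a counterexample either has already halted or
provably never will.

from fractions import Fraction

# Hypothetical toy "programs": name -> (length in bits, halting time or None).
programs = {
    "p1": (2, 5),         # halts after 5 steps
    "p2": (3, 100),       # halts after 100 steps
    "search": (3, None),  # never halts (searches forever for a counterexample)
    "p4": (4, 17),
}

omega = sum(Fraction(1, 2**l) for l, t in programs.values() if t is not None)
n = 4                                        # suppose the first n bits of omega are given
omega_n = Fraction(int(omega * 2**n), 2**n)  # the n-bit lower approximation

def halts(target):
    # Dovetail everything until the halting weight seen so far reaches omega_n;
    # after that, no program of length <= n can halt any more.
    halted, weight, t = set(), Fraction(0), 0
    while weight < omega_n:
        t += 1
        for name, (l, ht) in programs.items():
            if ht == t:
                halted.add(name)
                weight += Fraction(1, 2**l)
    return target in halted                  # sound because len(target) <= n

print(halts("p2"))        # True:  it does halt
print(halts("search"))    # False: it provably never halts, given omega_n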

∂15-Jan-83  2148	KDF @ MIT-MC 	Solomonoff 
Date: Saturday, 15 January 1983  21:13-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Solomonoff
In-reply-to: The message of 15 Jan 1983  16:37-EST from DAM

	Judging simplicity just by length of premises seems to me to be
inadequate.  Complexity of inference should also be a factor.  Despite
being able to reduce chemistry to quantum mechanics (at least in principle),
no one would suggest throwing out chemistry as a "more complex" theory
for making the deductions chemists have to make.

∂15-Jan-83  2153	DAM @ MIT-MC 	correspondence theory of truth - Circumscription and Occam   
Date: Saturday, 15 January 1983  19:14-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci%mit-oz @ MIT-MC
Subject: correspondence theory of truth - Circumscription and Occam
In-reply-to: The message of 15 Jan 1983  14:26-EST from John McCarthy <JMC at SU-AI>


	I have spent some time studying circumscription and am
interested in your ideas concerning the relationship between this and
Occam's razor (I do indeed spell poorly).  By the way I view
circumscription as a special case of the mu-calculus (I have forgotten
the reference).  The basic idea is to construct sentences of the form
(min P Q) where Q is a second order predicate (a predicate on
predicates) and P is a predicate.  (min P Q) is true just in case Q(P)
is true and there is no proper subset P' of P such that Q(P').  The statement
(min P Q) can be assumed, proven false, or proven true, just as any
other statement of logic.  However such statements can be used to
define the natural numbers (let P be the predicate "natural number"
and let Q be (lambda (P) P(0) and (forall (x) P(x) implies P(S(x))))).  Therefore
there can be no complete inference procedure.

	In the mu-calculus Q is required to be of the form:

	 (lambda (P) (forall (x) Phi(x) implies P(x)))

where Phi is required to be syntactically monotonic in P (i.e. P
occurs inside an even number of negations).  This ensures the
existence of a unique minimum.

	The mu-calculus can express the general notion of a well
founded order, something not expressible in first order logic or even
in L-omega1-omega.  On the other hand the mu-calculus can not directly
express the notion of its model being finite, something which is
expressible in constructive L-omega1-omega (but not first order logic).
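
On a finite domain the unique minimum guaranteed by the monotonicity
restriction can be seen directly by Knaster-Tarski iteration: start from
the empty predicate and repeatedly apply the operator determined by Phi
until nothing changes.  A small sketch in Python (an added illustration
using the natural-number definition above, cut off at 10):

DOMAIN = range(10)

def phi(x, P):
    # Phi(x): "x is 0, or x is the successor of something already in P";
    # P occurs only positively, so the operator below is monotone
    return x == 0 or (x - 1) in P

def least_fixed_point(phi, domain):
    P = set()
    while True:
        new = {x for x in domain if phi(x, P)}
        if new == P:
            return P
        P = new

print(sorted(least_fixed_point(phi, DOMAIN)))   # [0, 1, ..., 9]: the minimal P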

	David Mc

P.S.  I like your example of the object in the roofed box.  It indeed
demonstrates the need for ontological development as well as the
answering of open "questions" which are formulated in a pre-existing
language.  Does your model of Occam's razor handle ontological
development (i.e. changes in the "basic" language in terms of which
theories are formulated and in terms of which raw "perceptions"
are converted to statements)?

∂15-Jan-83  2224	HEWITT @ MIT-OZ 	Truth-Theoretic Semantics different from Message Passing Semantics  
Date: Sunday, 16 January 1983  01:21-EST
From: HEWITT @ MIT-OZ
To:   DAM @ MIT-OZ
Cc:   Hewitt @ MIT-OZ, Hewitt @ MIT-XX, phil-sci @ MIT-OZ
Reply-to:  Hewitt at MIT-XX
Subject: Truth-Theoretic Semantics different from Message Passing Semantics
In-reply-to: The message of 15 Jan 1983  17:32-EST from DAM

    Date: Saturday, 15 January 1983  17:32-EST
    From: DAM
    Sender: DAM
    To:   Hewitt
    cc:   phil-sci
    Re:   Statement, Truth, and Entailment

    	Date: Friday, 14 January 1983, 22:41-EST
    	From: Carl Hewitt <Hewitt>

    	I don't see any advantage to conceiving mathematics as existing
    	prior to any formulation of it.  Indeed I would rather conceive
    	of it as being a growing and evolving community.

    The set of definitions under consideration clearly evolves and grows.

I can conceive of mathematics as the published communications of the
mathematical community and the mathematical meaning of these communications
in terms of the effect that the communications have on the operation of the
community.  The chemical meaning of some of these communications would
be the effect they have on the operation of the chemist and
chemical-engineering communities.
This seems different to me than conceiving of mathematics as
somehow existing before the community of mathematicians.

    	Attempts at formalizing the notion of real number circulated within the
    	community of mathematicians for a long time before consensus was
    	reached for a time. ...  So I would like to understand the meaning
    	of these formalizations not in terms of whether they are "true" or
    	"false" but rather the use to which they are put by the mathematical
    	community.

    Definitions are never true or false.  Statements about defined notions
    are a different matter.

Would you accept that the meaning of a definition is "all of the models
which satisfy the definition"?  I can understand the meaning of the
various formulations for real numbers
(buggy, classical, constructivist) in terms of
the effect that their publication has on the activity
of the mathematical communities.  To me this seems like a big difference
in semantics.

∂15-Jan-83  2240	ISAACSON at USC-ISI 	"Obstacles-and-Roofs" Worlds 
Date: 15 Jan 1983 2233-PST
Sender: ISAACSON at USC-ISI
Subject: "Obstacles-and-Roofs" Worlds
From: ISAACSON at USC-ISI
To: JMC at SU-AI
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]15-Jan-83 22:33:26.ISAACSON>

In-Response-To: Your message of 15 Jan 83 1426


This may be a bit obscure, but perhaps of some interest.


I have developed a notion of "fantomark" patterns, that is,
(representations of) physical processes which are (by definition!)
not accessible by any kind of direct observation.

Nevertheless, these processes emit certain binary strings that
can be recorded and analyzed.  [I call these "streaks"]

I went on to consider some actual machinery that can take in such
streaks arriving from an "invisible" world, which is made out of
fantomark patterns, and infers from the streaks the structure of
that invisible domain.

The properties of this machinery turn out to be, in principle,
not different from the properties of a certain machinery I
considered for analyzing the workings of "visible", or actual,
physical domains.

Question: Do you see any connection between this way of thinking
and your "obstacles-and-roofs" model?

-- JDI

Subject: "Obstacles-and-Roofs" Worlds
In response to: Isaacson of 1983 jan 15 10pmPST
It would be interesting if the "obstacles-and-roof world" were an
example of your "fantomark" patterns.  It would be even more
interesting if your "actual machinery" could infer obstacles-and-roofs
systems from the binary strings they produce.
∂15-Jan-83  2244	HEWITT @ MIT-OZ 	Consensus Theory of Truth   
Date: Sunday, 16 January 1983  01:43-EST
From: HEWITT @ MIT-OZ
To:   DAM @ MIT-OZ
Cc:   Hewitt @ MIT-OZ, Hewitt @ MIT-XX, phil-sci @ MIT-OZ
Reply-to:  Hewitt at MIT-XX
Subject: Consensus Theory of Truth
In-reply-to: The message of 15 Jan 1983  17:20-EST from DAM

    Date: Saturday, 15 January 1983  17:20-EST
    From: DAM
    Sender: DAM
    To:   Hewitt
    cc:   phil-sci
    Re:   Consensus Theory of Truth

    	Date: Saturday, 15 January 1983, 01:39-EST
    	From: Carl Hewitt <Hewitt>

    	Whether or not a sentence follows from certain definitions can be quite
    	problematical.  What is your position on the truth of Fermat's Last
    	Theorem?  In "Proofs and Refutations:  The Logic of Mathematical
    	Discovery", Lakatos gives a good historical treatment of how difficult
    	it can be to decide whether or not the Euler formula for polyhedra
    	follows from the definitions for polyhedra.  "Social Processes and
    	Proofs of Theorems" contains more modern examples.   

    	I have read "Proofs and Refutations" by Lakatos and find the
    arguments he presents quite unconvincing.

My experience working as a "junior mathematician" was quite different.  The
arguments and kinds of examples presented by Lakatos came up quite directly
in my work.  Does anyone have a reference to follow-ups and/or critiques of
"Proofs and Refutations"?

    The "definitions" which are initially developed would not be
    considered as such by any modern mathematician.

Working as a mathematician, I found definitions
to be somewhat problematical and troublesome.  For example:
for a long time I had an intuition based on programming in LISP that
"recursion is more powerful than iteration".  This intuition flew in the
face of Minsky's well known theorem that a simple iterative two counter
program is universal.  After reading a paper by Luckham and Paterson,
I was able to distill a notion of recursion and then find an example
of a recursive program that could not be programmed iteratively.
Mike then conceived of a beautiful proof technique to establish the
result.  At that point I felt that we had done a good job of mathematics
which justified my original intuitions and deepened our knowledge of the
structure of programs.  When the work was published it had the effect
of stimulating a great deal of activity on the part of other mathematicians.


    	What do you think of the various schools of mathematics such as
    	Intuitionism Constructivism.  Perhaps some of us have different
    	innate mechanisms from others.

    	There are indeed various schools of mathematics (intuitionism
    and constructivism refer to the same school I think, while finitism
    is another; both of these are in addition to people who believe in
    different versions of "normal" set theory).  There is a simple account
    for this within the framework of an objective mathematical truth.  Any
    one mathematician is capable of understanding the premises adopted by
    any one of these schools. Constructivism, for example, can be easily
    understood by "normal" mathematicians by considering the statements
    constructivists make to be about a Kripke structure (collection of
    possible worlds) of a certain sort.  Thus I can talk about what is
    true under the assumptions of a constructivist, a finitist, or
    whatever.  Furthermore ALL MATHEMATICIANS AGREE ABOUT THE TRUTHS OF
    THE VARIOUS APPROACHES.  They simply disagree about which approach is
    "correct".  That different truths follow from different definitions
    and assumptions is not surprising.  It is surprising (I think) that
    all mathematicians agree on the truths of the various schools.
    	I interpret the existence of various schools of mathematics as
    an indication that we do not have any precise account of mathematical
    truth, i.e. none of the schools really captures the notion of human
    mathematical truth.  This does not mean that there is no such
    objective notion.  I think I can make precise arguments and yet I do
    not consider myself committed to any of the existing schools of
    mathematics (though I do understand some of these schools as precise
    theories of mathematical truth).


∂15-Jan-83  2336	John McCarthy <JMC@SU-AI> 	"Obstacles-and-Roofs" Worlds
Date: 15 Jan 83 2332
From: John McCarthy <JMC@SU-AI>
Subject: "Obstacles-and-Roofs" Worlds
To:   jdi at USC-ISI, phil-sci%mit-oz at MIT-MC 

Subject: "Obstacles-and-Roofs" Worlds
In response to: Isaacson of 1983 jan 15 10pmPST
It would be interesting if the "obstacles-and-roof world" were an
example of your "fantomark" patterns.  It would be even more
interesting if your "actual machinery" could infer obstacles-and-roofs
systems from the binary strings they produce.


∂15-Jan-83  2325	KDF @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
Date: Sunday, 16 January 1983  02:26-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   HEWITT @ MIT-OZ <Hewitt @ MIT-XX>
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Truth-Theoretic Semantics different from Message Passing Semantics
In-reply-to: The message of 16 Jan 1983  01:21-EST from HEWITT at MIT-OZ <Hewitt at MIT-XX>

 
   "I can conceive of mathematics as the published communications of
the mathematical community and the mathematical meaning of these
communications in terms of the effect that the communications have on
the operation of the community."

This is what worries me about the entire class of "community" metaphors
for mind - they do not describe how the members of the community come
to their conclusions.  That would seem to be the interesting part,
yet the perspective of such metaphors seems to lend no insight into
the phenomena. 

∂16-Jan-83  0151	ISAACSON at USC-ISI 	"Obstacles-and-Roofs" Machines    
Date: 16 Jan 1983 0144-PST
Sender: ISAACSON at USC-ISI
Subject: "Obstacles-and-Roofs" Machines
From: ISAACSON at USC-ISI
To: JMC at SU-AI, MINSKY at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]16-Jan-83 01:44:25.ISAACSON>

If I understand both JMC and Minsky correctly, machinery having
the capability of inferring "obstacles-and-roofs" type models
from their emitted binary strings is of the highest conceivable
order of intelligence, i.e., that of Nobelists.

Question: Am I right in this understanding of your views?

-- JDI

The statement, which may be wrong, was based on the presumption that
the machinery had no a priori reason for assuming any kind of
"obstacles-and-roofs" model.  The more specialized the machinery,
the less one would be impressed.  A program specialized to
obstacles-and-roofs systems might or might not be impressive depending
on what was in it.  Is it that you are of a different opinion or that
you hope to impress the world?  If the latter, do the research and
publish.
∂16-Jan-83  0446	GAVAN @ MIT-MC 	"Truth" as coherence, consensus, correspondence, and simplicity.
Date: Sunday, 16 January 1983  07:45-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ, JMC @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: "Truth" as coherence, consensus, correspondence, and simplicity.
In-reply-to: The message of 15 Jan 1983  18:09-EST from MINSKY

    Date: Saturday, 15 January 1983  18:09-EST
    From: MINSKY
    Sender: MINSKY
    To:   DAM, MINSKY
    cc:   Hewitt, phil-sci
    Re:   Solomonoff and RElativity, etc.

After having discussed with LEVITT and JCMA some aspects of the recent
debate, I want to try to translate something that Marvin said in
Marvinese into Gavanese.  Maybe someone who understands some portion
of both can tell me whether this is roughly a good translation.

    Solomonoff would consider that both a Newtonian and an Einsteinian
    would happily accept all of known and apparently sound mathematics.

Assuming that, at the social level of analysis (within the relevant
scientific community), there is a consensus favoring the attribution
of "truth" to certain theories that remain in the background of the
discussion . . .

    So the cost of applying tensor calculus and Minkowski geometry is
    zero, in a realistic sense, while the premise of not distinguishing
    acceleration from gravitation is apparently a gain in simplicity -

. . . then a theory that provides, at negligible cost, a more coherent
representation of the beliefs of the individuals engaged in this
consensual scientific community will be accepted by them as "true."
This newly-accepted theory can then be added to the corpus of
consensually-validated background knowledge, affording new theories
the opportunity to find a niche by extending and/or adding coherence
to the structure of the "true" beliefs shared in this particular
consensual community.

End of translation.

I think Marvin's "simplicity" and my "coherence" may be extensionally
equivalent.  The problem with "simplicity" as I see it, is that it can
be taken in at least two ways:

(1) Simplicity at the global level of the belief system (Carl's way, I
    think).

(2) Simplicity at the local level of the application domain assuming
    as given certain background theories (those agreed to, by
    consensus of the community involved in the discourse) (Marvin's
    way, I think).  

Lakatos argues that (1), Duhem's Simplism, fails to serve as a
rational criterion of the "truth" or "acceptability" of a theory
because it reduces the decision of whether to accept a theory to a
matter of fashion or taste (or, I might add, of the least resistance
to economic power).  Meaning (2) may reduce "truth" to fashion or
taste also, but only at the fringes of knowledge.  It organizes
according to some goodness-of-fit criterion.  In other words, it makes
the representation of the application domain less kludgey.

I don't know whether a norm of "coherence" would be any less ambiguous
than a norm of "simplicity," but I think it's important to distinguish
the norm of simplicity from other norms, such as "beauty," which are
often associated with simplicity (I often hear people equate "truth"
and "beauty," but I personally don't think that the former is as
culturally relative as the latter).

I'm not really sure what JMC means by "the correspondence theory of
truth," since he speaks in a language with which I'm only vaguely
familiar (thank you Marvin for attempting to be comprehensible).  If
his meaning is equivalent to what I take to be the standard
philosophical usage, then I think he's way off-base.  I take the
correspondence theory to hold (roughly) that there's some sort of
approximate correspondence between things in the world and things in
our heads.  Do you really mean this?

One argument against this version of the correspondence theory is that
it's impossible to demonstrate unless you can take some sort of
meta-position, a "God's-eye view", independent of either system.  The
consensus theory can be demonstrated by reference to findings in the
history of science and the sociology of knowledge.  The coherence
theory, it would seem, is potentially demonstrable in a learning
program.

I'd appreciate any comments on this that don't include gratuitous
remarks like "muddled" and "scientifically unpromising."

∂16-Jan-83  0506	GAVAN @ MIT-MC 	meta-epistemology and the God's-eye view.   
Date: Sunday, 16 January 1983  08:04-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   JMC @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: meta-epistemology and the God's-eye view.

It just occurred to me that what you mean by a "meta-epistemological
model" is precisely what I described as a "God's-eye view."  How do
you propose to establish the nature of this "meta-epistemological
model?"  Will you construct a "meta-meta-epistemological model?"  

∂16-Jan-83  0515	GAVAN @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics   
Date: Sunday, 16 January 1983  08:12-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-OZ, HEWITT @ MIT-OZ <Hewitt @ MIT-XX>,
      phil-sci @ MIT-OZ
Subject: Truth-Theoretic Semantics different from Message Passing Semantics
In-reply-to: The message of 16 Jan 1983  02:26-EST from KDF

    Date: Sunday, 16 January 1983  02:26-EST
    From: KDF
    Sender: KDF
    To:   HEWITT at MIT-OZ <Hewitt at MIT-XX>
    cc:   DAM, Hewitt, phil-sci
    Re:   Truth-Theoretic Semantics different from Message Passing Semantics

    This is what worries me about the entire class of "community" metaphors
    for mind - they do not describe how the members of the community come
    to their conclusions.  That would seem to be the interesting part,
    yet the perspective of such metaphors seems to lend no insight into
    the phenomena. 

How can you explain how they come to their conclusions without
explaining how they came to their premises?  

∂16-Jan-83  0541	GAVAN @ MIT-MC 	consensus theory of truth    
Date: Sunday, 16 January 1983  08:27-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   dam @ MIT-OZ, phil-sci%mit-oz @ MIT-MC
Subject: consensus theory of truth    
In-reply-to: The message of 14 Jan 1983  12:54-EST from John McCarthy <JMC at SU-AI>

    Date: Friday, 14 January 1983  12:54-EST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan, dam, phil-sci%mit-oz at MIT-MC
    Re:   consensus theory of truth    

    I am, as I suppose you suspect, an adherent of the correspondence theory of
    truth, both within mathematics and outside it.  Certainly there are differences
    between mathematics and the common sense world, and I expect to address these.
    DAM seems not to have understood that my meta-epistemology proposal
    involved the correspondence theory.  While the "scientist" in that proposal
    can only learn about the "world" through his senses, we mathematicians
    can study the correspondence between what he believes and what is true
    of that "world".  

Huh?  Is mathematics complete?  It seems to me that your program is
not to model the correspondence between what the scientist believes
and what is "true" of the world, but the correspondence between what
the scientist believes and what the mathematician believes is "true"
of the world.

    We can study what correspondences are possible and what strategies achieve 
    them.

Are you saying there's only one true model of the world, that you have
access to it, and that you seek to assess different scientific
truth-claims against this base-line?  Am I missing something here?
What's the story?  Is what is true of the world independent of what is
believed by someone?  How do you know?

∂16-Jan-83  0817	ISAACSON at USC-ISI 	A note on coherence
Date: 16 Jan 1983 0812-PST
Sender: ISAACSON at USC-ISI
Subject: A note on coherence
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]16-Jan-83 08:12:56.ISAACSON>

In-Reply-To: Your message of Sunday, 16 Jan 1983, 07:45-EST


Without pretending to know either Marvinese or Gavanese [for I'm
lucky to know some English on top of my native Hebrew], I wish to
throw in this comment.

To my mind's eye, coherent ideas are both simple AND beautiful [I
think this is also Peirce's position] and tend to promote
coherence of beliefs within a given society.

Perhaps it is possible to associate some qualified notions of
"simplicity" and "beauty" [from the point of view of a given
society-of-minds subculture] with a notion of "coherence" which
is DUAL in a certain sense.

That is, the coherence of ideas generated by a given society is a
reflection of and reflected in the societal coherence (and its
belief system), and vice versa.  [the last phrase may be redundant
...]

As an afterthought, that is, perhaps, why people talk
figuratively about "adherents" of theories; "adherence" being a
rough synonym of "coherence".

-- JDI


∂16-Jan-83  1213	John McCarthy <JMC@SU-AI> 	correspondence theory of truth   
Date: 16 Jan 83 1154
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory of truth   
To:   gavan%mit-oz at MIT-MC
CC:   phil-sci%mit-oz at MIT-MC  

Subject: correspondence theory of truth
In reply to: GAVAN of 1983 jan 16 0827EST

"Are you saying there's only one true model of the world, that you have
access to it, and that you seek to assess different scientific
truth-claims against this base-line?  Am I missing something here?
What's the story?  Is what is true of the world independent of what is
believed by someone?  How do you know?"

	A correspondence theory of what truth is can't reasonably
be based on a claim to have guaranteed access to it.  Thus it may
be true that Napoleon died of arsenic poisoning, but perhaps no-one
will ever know it.  Moreover, independently of whether it is true,
there may or may not develop a consensus on the question.  This
view is a commonplace of folk psychology, and my opinion is that
this is another of the matters in which folk psychology is right,
and most attempts to be more sophisticated are unsuccessful.
Godel's merit was to systematically apply this idea to mathematics.
He thought that the continuum hypothesis was most likely false,
proved that ZF was inadequate to prove it false, hoped that someone
would find additional intuitively acceptable axioms from which
it could be proved false, and wasn't sure that this would ever happen.
In my view, this is an eminently reasonable attitude except that
I never understood very well the anomalous examples that led Godel
to think the continuum hypothesis is very likely false.

	Before Godel, it was possible to believe in a correspondence
theory and to hope that the truth about every question would
eventually be determined.  If you don't separate the notion of
truth from the procedures for determining it, then you find yourself
unable to accept a proposition as meaningful unless you have
some advance assurance that it is decidable.  This is contrary to
common sense practice, which makes conjectures without prejudice
to being able to settle them.  

	(Alas for people inclined to constructivist wishful thinking,
I fear it is going to be necessary to have theories
that allow discussing propositions without prejudice to whether
they are meaningful - let alone decidable.  A hint: If we remove the
restriction in the comprehension axiom of ZF that the selection
is a subset of an existing set, we get an inconsistent naive set
theory.  However, the ZF restriction on comprehension is more
restrictive than is necessary to get something believed consistent.
I believe it is possible to play a Godel-like trick to construct
from a set theory a stronger set theory that is consistent if the
first one is and to iterate this process through constructive
ordinals as Feferman does with theories of arithmetic.  The iteration
process involves arbitrary choices that can't be specified systematically
so the limits of the iteration do not have r.e. sets of axioms.
The consequence of all this, I conjecture, will be that the set
of meaningful propositions of set theory will not be r.e., i.e.
cannot be specified in a single theory.  This is not what one
would like to be true, but when one accepts a correspondence
theory of truth, one cannot expect that everything will turn
out in accordance with one's preferences).  As some famous scientist
put it: "The universe is not only queerer than we know; it is
queerer than we can know".

	The meta-epistemology I propose is indeed a "God's eye view",
but it doesn't presuppose or conclude that I have a God's eye view
of this world.  It proposes that we begin by studying the problem
of developing science within worlds of known structure.  For example,
it has been shown that Conway's life universe admits universal
computers that can reproduce (all M.I.T. rumor; I don't have a
reference).  We can imagine a physicist program in the life world,
and can ask what epistemological mechanisms (i.e. research strategies)
if any would permit a life world physicist to determine that the
fundamental physics of his world was that of a cellular automaton and
Conway's automaton in particular.  Besides doing examples of varying
complexity, we can develop mathematical theories relating properties
of the world and the subsystems regarded as scientists to what facts
about the world these scientists can determine.  After developing
such theories, we can try to apply them to our own world.  My
expectation would be that it would be discovered that consensus based
strategies would either be impossible to define rigorously or would
rarely work or would be unnecessarily complicated and simplify down
to correspondence based strategies.
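
For concreteness, the fundamental physics of such a world of known
structure is only a few lines long.  Here is one update step of Conway's
Life in Python (a minimal sketch; the imagined life-world physicist would
see only the successive configurations, never this rule):

from collections import Counter

def life_step(live):
    # live is a set of (x, y) cells; count the live neighbours of every cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth with exactly 3 neighbours, survival with 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))          # the glider, shifted one cell diagonally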

	Finally, my remarks about "muddled" and "scientifically unpromising"
were intended to suggest that it is possible and necessary to do
a lot better than this discussion has been doing.  Perhaps I'm
mistaken in this opinion of much of the discussion.

	There is an eight page article entitled "The correspondence theory
of truth" in Edwards's "Encyclopedia of Philosophy".  It begins with
Plato, includes the Stoics, and in modern times includes G. E. Moore,
Russell (who coined the term) and Wittgenstein.  It ends with Tarski.
I have to confess I didn't get much out of the article, which is by
A. N. Prior, the developer of tense logic.  To put it in hacker terms:
"The program has so many bugs and such bad comments, that it
seems better to chuck it and start over"  - except of course for Tarski's
technical results.  Others may have better luck with it, and there is
a bibliography.  The Encyclopedia is generally excellent and is sometimes
available as Book-of-the-Month premium for joining - worth it if you
are strong minded enough to subsequently buy the bare minimum of four
books.

∂16-Jan-83  1305	DAM @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
Date: Sunday, 16 January 1983  15:55-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Truth-Theoretic Semantics different from Message Passing Semantics


	Date: Sunday, 16 January 1983  01:21-EST
	From: HEWITT at MIT-OZ <Hewitt at MIT-XX>

	...
	Would you accept that the meaning of a definition is "all of the
	models which satisfy the definition"?  I can understand the meaning of
	the various formulations for real numbers (buggy, classical,
	constructivist) in terms of the effect that their publication has on
	the activity of the mathematical communities.  To me this seems like a
	big difference in semantics.

	I do consider the "meaning" of a definition to be a predicate on
mathematical structures (or if you wish a set of structures, the set
satisfying the definition).  Further I agree that your notion of meaning
is very different.  I find your notion ill-defined, very hard to think
about, extremely non-modular (the meaning of something could depend on
anything), and reminiscent of behaviourism.  Do you think that it is
impossible to define the natural numbers in a simple, precise, and
modular manner?  What of a definition for a simple finite state machine?

	David Mc

∂16-Jan-83  1315	DAM @ MIT-MC 	Objectivity of Mathematics
Date: Sunday, 16 January 1983  16:10-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Objectivity of Mathematics


	Date: Sunday, 16 January 1983  01:43-EST
	From: HEWITT at MIT-OZ <Hewitt at MIT-XX>

	...
	For a long time I had an intuition based on programming in LISP that
	"recursion is more powerful than iteration".  This intuition flew in
	the face of Minsky's well known theorem that a simple iterative two
	counter program is universal.  After reading a paper by Luckham and
	Paterson, I was able to distill a notion of recursion and then find an
	example of a recursive program that could not be programmed
	iteratively.  Mike then conceived of a beautiful proof technique to
	establish the result. ...

	I do not want anyone to get the idea that you refuted Minsky's
established theorem; you and Mike simply found an alternative definition
for what it means for two computational systems to be "equivalent".  Of
course I agree that finding useful definitions is very important but
this is not an example supporting Lakatos's claims.

	David Mc

∂16-Jan-83  1359	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Sunday, 16 January 1983  16:59-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
cc:   GAVAN @ MIT-OZ, JCM @ SU-AI
Subject: Consensus Theory of Truth


	I agree with McCarthy that the more precise we can make this discussion
the better (although we should not let the desire for precision control
our choice of subject matter, and in the absence of "good" precise
models we should feel free to express intuitions).  Lakatos's objections
to simplism (that it leads to subjectivism) are unfounded when one looks
at the mathematical details of Solomonoff et al.'s theory.  If one theory
is simpler than another it is objectively simpler; the notion of "simpler"
has a precise (totally defined) meaning in this work.  It is important to
understand mathematical detail when mathematical models are around.
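
One way to see what "totally defined" buys: Kolmogorov complexity itself
is uncomputable, but any fixed compressor gives an objective upper bound
on description length, and everyone running the same compressor gets the
same comparison.  A crude stand-in, sketched in Python (an added
illustration only; zlib is of course not the Solomonoff-Kolmogorov-Chaitin
measure):

import os
import zlib

def description_bits(data: bytes) -> int:
    # length in bits of one fixed, agreed-upon encoding of the data
    return 8 * len(zlib.compress(data, 9))

regular = b"01" * 500            # data with a short generating rule
irregular = os.urandom(1000)     # random bytes: essentially incompressible

print(description_bits(regular), description_bits(irregular))
# the regular string gets a far shorter description; whoever runs the
# same compressor gets the same numbers, so the comparison is not a
# matter of taste
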
	I think that the correspondence theory of truth works fine for
mathematical truth (and this is probably why its adherents are largely
mathematicians).  However where empirical truth is concerned it
seems to me that one is best off DEFINING the world to be behaviour
and sense data (this is not a mathematical definition).  We can never
get that god's eye view anyway.  Of course theories can hypothesize
that the world is a certain way (points moving under roofs) and there
may be some such theory which can never be improved upon (the ultimate
unified field theory).
	Consider the following question concerning the correspondence
theory.  Consider a world which is the spatial Fourier transform of ours.
In this world are beings which are Fourier transforms of us.  Furthermore
this transform correspondence holds for all time.  Is there any sense in
which "our" world is different from "that" world?  Are worlds which in
principle produce the same sense data really different worlds?

	David Mc

∂16-Jan-83  1506	John McCarthy <JMC@SU-AI>
Date: 16 Jan 83 1203
From: John McCarthy <JMC@SU-AI>
To:   jdi%isi at SU-SCORE
CC:   phil-sci%mit-oz at MIT-MC

The statement, which may be wrong, was based on the presumption that
the machinery had no a priori reason for assuming any kind of
"obstacles-and-roofs" model.  The more specialized the machinery,
the less one would be impressed.  A program specialized to
obstacles-and-roofs systems might or might not be impressive depending
on what was in it.  Is it that you are of a different opinion or that
you hope to impress the world?  If the latter, do the research and
publish.


∂16-Jan-83  1540	BATALI @ MIT-MC 	Consensus Theory of Truth   
Date: Sunday, 16 January 1983  17:47-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   GAVAN @ MIT-OZ, JCM @ SU-AI, phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth
In-reply-to: The message of 16 Jan 1983  16:59-EST from DAM

    Date: Sunday, 16 January 1983  16:59-EST
    From: DAM

    Lakatos's objections
    to simplism (that it leads to subjectivism) are unfounded when one looks
    at the mathematical details of Solomonoff et. al.'s theory.

In fact, I think that this is the main value of the work, the thing
that makes Marvin refer to the post-Solomonoff era as a time for
reformulating some philosophy of science ideas.  But it does not solve
very many of the problems -- it just shows that a notion of "simpler"
can be given an objective treatment.  There are still big problems in,
for example, finding the simpler formulation; recognizing that it is
simpler; convincing others that it is simpler and so on.  These
problems are what scientists do from day to day and the view of
science as a communicating community may be more valuable in working
them out.  In fact, despite the success of the Solomonoff approach, it
might be worthwhile to treat Occam's razor not as an objectively
defined notion, but rather as a roughly defined high-level goal.  That
is: "Because it is simpler" is allowed as a valid reason in a
scientific argument.  Showing that it is indeed simpler will take more
argument, and those arguments can take many forms, from Solomonoff to
appeals to "elegance."  But the point is that the first justification
of the theory is simplicity -- which is then itself justified.  This
approach is essentially that of Doyle, and I take it to be very much
different from that of mathematics, in which to introduce a term like
"simple" one must exhaustively define it.  In Doyle's approach,
statements are justified not by the definitions of the included terms
(though such definitions, if they exist, will play a part) but by the
support they get from other statements.

    	I think that the correspondence theory of truth works fine for
    mathematical truth (and this is probably why its adherents are largely
    mathematicians).

I don't think so.  The correspondence theory of truth says, at bare
bottom, that the truth of statements depends on the way the world is.
Mathematical truth is precisely that which does not depend on the
world at all.  An example:  How do I tell if the following statements
are true?

    1.  There is an infinite number of primes.
    2.  My cow regurgitated the broccoli.

For statement 1. I look at the definitions of prime and infinite and
so on.  For statement 2. I may look at "definitions" of "Cow" and
"broccoli" and the rest, but I will never know the fact of the matter
until I look at the world and see what happened.
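
For statement 1 the settling-by-definitions can even be run mechanically:
given any finite list of primes, the product of the list plus one has a
prime factor outside the list, so no finite list is exhaustive.  A small
sketch in Python (an added illustration; the particular list is arbitrary):

from math import prod

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

primes = [2, 3, 5, 7, 11, 13]
witness = smallest_prime_factor(prod(primes) + 1)
print(witness, witness not in primes)    # 59 True: a prime missing from the list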

I would take the COHERENCE view of truth -- in which the truth of a
member of a set of statements depends only on properties of the
statements and relations among them -- to be much more appealing to a
math person because the notion of coherence might be more easily
formulated mathematically.

Perhaps the best way to go is to agree that coherence is important,
but there is some objective "world" that is the ultimate arbiter of
the truth of statements.  Arbitration consists in checking sense data
against predictions.

    However where empirical truth is concerned it
    seems to me that one is best off DEFINING the world to be behaviour
    and sense data.

It seems to me that the world has cows and clouds and atoms and minds
and everything.  Defining all that away is a big price to pay for
results that aren't in yet.  And why can't we understand the world in
terms of cows and clouds?  I think that we do, as people.  So why
don't we try to understand (as AIers) how we (as people) understand
the world in those realistic terms?

    	Consider the following question concerning the corrospondence
    theory.  Consider a world which is the spatial Fourier transform of ours.
    In this world are beings which are Fourier transforms of us.

What are we that someone can take Fourier transforms of us?  How do
you take the Fourier transform of a cow?  Does it lactate in the
frequency domain?  Does it moo milk?

∂16-Jan-83  1600	ISAACSON at USC-ISI 	"O&R" machines
Date: 16 Jan 1983 1551-PST
Sender: ISAACSON at USC-ISI
Subject: "O&R" machines
From: ISAACSON at USC-ISI
To: JMC at SU-AI
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]16-Jan-83 15:51:22.ISAACSON>

In-Reply-To: Your message of this afternoon

It is not that I'm necessarily of a different opinion.  I simply
never thought of it that way and was triggered by the "obstacles
and roofs" example you gave last night.

It is the apparent coincidence of your (and Minsky's?) opinions on
super intelligence with previously unexplained aspects of the
behaviour of my system that intrigues me.

I can't tell in advance how impressive it's going to be.  I'll
settle for mildly impressive.


∂16-Jan-83  1648	HEWITT @ MIT-OZ 	theories of meaning    
Date: Sunday, 16 January 1983  19:41-EST
From: HEWITT @ MIT-OZ
To:   John McCarthy <JMC @ SU-AI>
Cc:   gavan%mit-oz @ MIT-MC, Hewitt @ MIT-XX, phil-sci%mit-oz @ MIT-MC
Reply-to:  Hewitt at MIT-XX
Subject: theories of meaning
In-reply-to: The message of 16 Jan 1983  11:54-EST from John McCarthy <JMC at SU-AI>

    Date: Sunday, 16 January 1983  11:54-EST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan%mit-oz at MIT-MC
    cc:   phil-sci%mit-oz at MIT-MC
    Re:   correspondence theory of truth   


    	Before Godel, it was possible to believe in a correspondence
    theory and to hope that the truth about every question would
    eventually be determined.  If you don't separate the notion of
    truth from the procedures for determining it, then you find yourself
    unable to accept a proposition as meaningful unless you have
    some advance assurance that it is decidable.  This is contrary to
    common sense practice, which makes conjectures without prejudice
    to being able to settle them.  

I don't believe that your argument quite settles the debate.  Suppose
that the meaning of a sentence is taken to be the (partial) procedures
for establishing or refuting the sentence.  Then there is no requirement
that the sentence be decidable.  Does this theory of meaning have a
standard name?  Any good citations?

    	(Alas for people inclined to constructivist wishful thinking,
    I fear it is going to be necessary to have theories
    that allow discussing propositions without prejudice to whether
    they are meaningful - let alone decidable.

In the above theory of meaning, a sentence would be meaningless if there
were no partial procedures for establishing or refuting the sentence.
Does this cause a problem?

∂16-Jan-83  1706	John McCarthy <JMC@SU-AI> 	theories of meaning    
Date: 16 Jan 83 1657
From: John McCarthy <JMC@SU-AI>
Subject: theories of meaning    
To:   hewitt%mit-xx at MIT-MC, phil-sci%mit-oz at MIT-MC  

Subject: theories of meaning
Replying to: Hewitt's of 1983 Jan 16 19:41est

"I don't believe that your argument quite settles the debate.  Suppose
that the meaning of a sentence is taken to be the (partial) procedures
for establishing or refuting the sentence.  Then there is no requirement
that the sentence be decidable.  Does this theory of meaning have a
standard name?  Any good citations?"

I don't see what the partial procedures would be in either the
case of the continuum hypothesis in mathematics or the question of
whether Napoleon died of arsenic poisoning.  I believe 18th or early
19th century references can be found claiming that the geography of
the far side of the moon and the composition of the sun are both
meaningless, because there were no procedures for determining them.
So I suppose the question of whether another question is meaningful
may be meaningful, but again it may not be clear whether there is
a procedure for deciding whether a question is meaningful.
I don't know about citations.  Try the Encyclopedia of Philosophy.


∂16-Jan-83  1710	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Sunday, 16 January 1983  19:59-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth


	Date: Sunday, 16 January 1983  17:47-EST
	From: BATALI


	"Because it is simpler" is allowed as a valid reason in a scientific
	argument.  Showing that it is indeed simpler will take more argument,
	and those arguments can take many forms, from Solomonov to appeals to
	"elegance."  But the point is that the first justification of the
	theory is simplicity -- which is then itself justified. ...

	I agree that the notion of "simpler" as it is actually used by
scientists is a far from understood notion and that Solomonoff et al.
certainly do not provide an accurate account of what a "theory" is,
not to say anything about "simpler".  I must admit however that I
am not convinced Doyle provides any insight into this issue, though
I must admit I am not very familiar with his stuff.  The EMPIRICAL
study of science is indeed different from mathematics, as are all
empirical studies, and we should expect theories of science to contain
natural kind terms, and notions defined in terms of natural kind terms.

	I don't think (the correspondence theory of truth works fine
	for mathematics).  The correspondence theory of truth says, at bare
	bottom, that the truth of statements depends on the way the world is.
	Mathematical truth is precisely that which does not depend on the
	world at all.  ...

	Well I agree that mathematics is independent of the world and
therefore perhaps that "the correspondence theory" is simply not
applicable to the notion of mathematical truth.  However I do think
that mathematical truth is best understood by assuming that there is
some mathematical universe (the universe of all sets for example) and
that we have intuitive access to truths about this universe.  Of
course these truths (if they are objective and real-world independent)
must be generated by some humanly universal inference mechanism, but
this is a computational reductionist view which I do not consider to
be as useful as the assumption of the existence of a mathematical
universe.

	It seems to me that the world has cows and clouds and atoms and minds
	and everything.  Defining all that away is a big price to pay for
	results that aren't in yet.  And why can't we understand the world in
	terms of cows and clouds?  I think that we do, as people.  So why
	don't we try to understand (as AIers) how we (as people) understand
	the world in those realistic terms?

	In defining the world to be behaviour and sense data I have
not ruled out our UNDERSTANDING it in terms of clouds and cows.  I
agree completely that this is how we do indeed understand it and we as
AI'ers must account for "ontological perception" or perception in
terms of existent objects of certain ontological types.  I can parse a
bit string as characters, then as words, then as sentences.  Or
a bit string can be interpreted as pointers, then as a graph.  This is
consistent with taking the world ultimately to be just a bit string
which is best understood in terms of clouds and cows.  One can assume
that "the world" is a perceptual bit string without reducing all concepts
to bit strings, just as one can assume that the world is actually a wave
function without thinking of one's children that way.

	What are we that someone can take Fourier transforms of us?  How do
	you take the Fourier transform of a cow?  Does it lactate in the
	frequency domain?  Does it moo milk?

	If one ASSUMES that the world (including people and cows and all)
is actually a wave function then one can take the Fourier transform of
that function.  But forgetting about Fourier transforms consider any
hypothesis that the world (including us) is actually a mathematical
structure of a certain type and consider any one-to-one transformation
from structures of that type to structures of some other type.  This
one-to-one transformation provides a kind of isomorphism between
the two types of worlds, and the predictions about perceptions are the
same under either view.  Thus I do not think it makes sense to say that
there are "actually" cows.  Rather the notion of cow is useful in
"understanding" our sense data.

	It seems that we pretty much agree, except perhaps you are a
realist and think that there really are cows; well, the world makes
more sense if we assume there are ...

	David Mc

∂16-Jan-83  1712	DAM @ MIT-MC 	Occam's Razor   
Date: Sunday, 16 January 1983  20:08-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Occam's Razor


	Date: Sunday, 16 January 1983  17:47-EST
	From: BATALI

	... Despite the success of the Solomonoff approach, it
	might be worthwhile to treat Occam's razor not as an objectively
	defined notion, but rather as a roughly defined high-level goal. ...

	It seems to me that we should treat Occam's razor as a natural
kind (i.e. assume there is a notion of "simpler" used by real-world
people).  This natural kind may turn out to be a precisely definable
rough high level goal.

	David Mc

∂16-Jan-83  1757	John Batali <Batali at MIT-OZ> 	theories of meaning    
Date: Sunday, 16 January 1983, 20:50-EST
From: John Batali <Batali at MIT-OZ>
Subject: theories of meaning
To: Hewitt at MIT-XX, JMC at SU-AI
Cc: gavan%mit-oz at MIT-MC, phil-sci%mit-oz at MIT-MC
In-reply-to: The message of 16 Jan 83 19:41-EST from HEWITT at MIT-OZ

    Date: Sunday, 16 January 1983  19:41-EST
    From: HEWITT @ MIT-OZ

    I don't believe that your argument quite settles the debate.  Suppose
    that the meaning of a sentence is taken to be the (partial) procedures
    for establishing or refuting the sentence.  Then there is no requirement
    that the sentence be decidable.  Does this theory of meaning have a
    standard name?  Any good citations?

    In the above theory of meaning, a sentence would be meaningless if there
    were no partial procedures for establishing or refuting the sentence.
    Does this cause a problem?

This is essentially verificationism.  The standard description of
verificationism is that it takes the meaning of a statement to be the
procedure for verifying it.  Differences have arisen between those who
want the procedure to be a PART of the meaning; those who say that to
know the procedure is to know the meaning and so on.  Verificationism
was born from logical positivism and among those adherents it was fun to
say that a statement was meaningless unless it was analytic or there was
some procedure to verify it.  The most obvious criticism of the approach
is to analyze verificationism's assumption in verificationist terms.
How does one verify the statement "a statement's meaning is (related to)
the procedure for verifying it"?  Or is it analytic?  

But I think that verificationism is a good example of a theory which has
been discarded as an account of science but might nonetheless be useful
in AI systems.  As Carl implies, the notion of procedure is much more
developed now than in the positivist's days, and so perhaps there are
some useful results there.  And, in the meta-epistemology spirit of JMC,
we can say that FOR OUR ROBOT, statements have such and such a meaning,
such and such being defined in terms of verification procedures.  Unlike
philosophers of science, we would not have to claim that WE worked that
way.  Thus some of the easier criticisms of verificationism as a
philosophy of science couldn't be brought up against verificationism as
the basis for a meta-epistemological theory.

Citations: The good old Encyclopedia of Philosophy is a good place.  I
think the article is entitled "The Verifiability principle."

∂16-Jan-83  1901	KDF @ MIT-MC 	theories of meaning  
Date: Sunday, 16 January 1983  21:53-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   HEWITT @ MIT-OZ <Hewitt @ MIT-XX>
Cc:   gavan%mit-oz @ MIT-MC, John McCarthy <JMC @ SU-AI>,
      phil-sci%mit-oz @ MIT-MC
Subject: theories of meaning
In-reply-to: The message of 16 Jan 1983  19:41-EST from HEWITT at MIT-OZ <Hewitt at MIT-XX>

	I believe what you were discussing ("meaning being determined by
the (partial) procedures for establishing or refuting the sentence")
is what Bill Woods calls "Procedural Semantics".  Bill has reports
available.

∂17-Jan-83  0105	John McCarthy <JMC@SU-AI> 	verificationism        
Date: 17 Jan 83 0051
From: John McCarthy <JMC@SU-AI>
Subject: verificationism    
To:   phil-sci at MIT-OZ    

I think verificationism is no good for AI for the same reason as it
is no good for science.  Taken seriously, it limits thought.  Consider
the hypothesis that matter is composed of atoms.  During most of the
nineteenth century, no-one could think of any way of verifying it,
and towards the end of the century, some famous chemist of a positivist
frame of mind, Ostwald perhaps, emphasized that it was just a means
of keeping track of some phenomena, e.g. the law of combining proportions.
Shortly thereafter, it was verified and Avogadro's number was computed
in a variety of ways.  The most spectacular way was that individual
scintillations from radioactive decay were observed, a rate of decay
in atoms per second computed and compared with a rate of decay in
(say) micrograms per year of a large sample.
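
The comparison can be spelled out with round, illustrative figures (my own
order-of-magnitude numbers, not the historical measurements): counting
scintillations gives atoms lost per unit time, weighing gives grams lost
per unit time, and their ratio, scaled by the atomic weight, is Avogadro's
number.  A one-screen sketch in Python:

# Illustrative, order-of-magnitude figures (assumptions, not the historical
# data); each decay is treated as the loss of one atom from the sample.
decays_per_second = 3.7e10         # counted scintillations from about 1 g of radium
mass_loss_g_per_year = 4.4e-4      # the same sample loses roughly this much per year
atomic_weight = 226.0              # grams per mole of radium

seconds_per_year = 3.156e7
atoms_lost_per_year = decays_per_second * seconds_per_year
moles_lost_per_year = mass_loss_g_per_year / atomic_weight

print(f"{atoms_lost_per_year / moles_lost_per_year:.2e}")   # about 6.0e23 per mole
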
	If AI systems were programmed to consider only propositions
for which they could generate a means of verification, they would be
extremely unimaginative.
	I must confess, however, that my real objection is different.
If a universe exists and evolves intelligence, why should it only
evolve intelligences that can observe every feature of the universe?
I can't imagine a mechanism of evolution that would guarantee this.


∂17-Jan-83  0108	KDF @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics
Date: Monday, 17 January 1983  04:08-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-OZ, HEWITT @ MIT-OZ <Hewitt @ MIT-XX>,
      phil-sci @ MIT-OZ
Subject: Truth-Theoretic Semantics different from Message Passing Semantics
In-reply-to: The message of 16 Jan 1983  08:12-EST from GAVAN

 
    How can you explain how they come to their conclusions without
    explaining how they came to their premises?  

You can't, and that is the point.  The "community" metaphor doesn't say
ANYTHING about the structure of the individual minds in it - how they
get their premises, their conclusions, etc.  At best it is a guide to
organizing communication between modules, but that isn't too interesting
until you know what the modules can do.  Early vision was a great example
of this - evidence of module boundaries was ignored, heterarchical theories
were generated, and the approach sank into a quagmire.  It is important to
understand not just how to compute something, but what to compute and why
(Marr party line..).  This is NOT to say that "community" metaphors won't
tell you anything - Marvin is surely right in claiming that a large part
of a theory of intelligence will be architectural.  But unless we understand
what "modules" might exist, then our theorizing about communication will
probably be underconstrained.  

∂17-Jan-83  0202	GAVAN @ MIT-MC 	Consensus Theory of Truth    
Date: Monday, 17 January 1983  04:59-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   JCM @ SU-AI, phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth
In-reply-to: The message of 16 Jan 1983  16:59-EST from DAM

    Date: Sunday, 16 January 1983  16:59-EST
    From: DAM
    Sender: DAM
    To:   phil-sci
    cc:   GAVAN, JCM at SU-AI
    Re:   Consensus Theory of Truth

    	I agree with McCarthy that the more precise we can make this discussion
    the better (although we should not let the desire for precision control
    our choice of subject matter, and in the absence of "good" precise
    models we should feel free to express intuitions).  

When we are discussing issues like "how do agents in societies and
agents in scientific communities actually go about doing what they
do?" there is no way we can be "precise."  Moreover, the desire for
precision often has the side-effect of excluding from the discussion
anyone who isn't already playing the same language game you're
playing.  Marvin seems to be able to express mathematical concepts in
a way that is comprehensible to the non-mathematician.  Why can't you?

    Lakatos's objections
    to simplism (that it leads to subjectivism) are unfounded when one looks
    at the mathematical details of Solomonoff et. al.'s theory.  

I basically agree with this, although Duhem's simplism (what Lakatos
and Popper object to) may not be equivalent to simplicity in
Solomonoff's sense.  

    If one theory
    is simpler than another it is objectively simpler, the notion of "simpler"
    has a precise (totally defined) meaning in this work.  It is important to
    understand mathematical detail when mathematical models are around.

It seems to me that there are two definitions of "simpler" in
Solomonoff.  One has to do with the probability of a given string
given the experience of other strings, and the other has to do with
the probability of strings for which there is neither experience nor a
theory.  The first aspect is analogous (not equivalent) to what
Lakatos calls "background theories" and the second aspect os analogous
to some theory under discussion.  In the case of Galileo, the latter
is represented by his astronomical theory and the former is
represented by his optical theory.  Since Galileo's contemporaries did
not accept the optical theory (it had a low probability) they could
not accept the astronomical theory either, despite its internal
simplicity.  Now if this is approximately (imprecisely) what is meant
by Solomonoff's theory, then I don't see what it adds to Lakatos other
than mathematical terminology.  This mathematical terminology may make
the theory more precise and comprehensible for you, but it obfuscates
it for everyone else.

    	I think that the correspondence theory of truth works fine for
    mathematical truth (and this is probably why its adherents are largely
    mathematicians).  However where empirical truth is concerned it
    seems to me that one is best off DEFINING the world to be behaviour
    and sense data (this is not a mathematical definition).  

If it only holds in the imagined universes of mathematicians, then I
can't understand how anyone can defend the correspondence theory of
truth.  I believe you've left out an important component.  The world
is more than just belief (sense data) and behavior (including
linguistic behavior and reference); it is also WHAT WE DESIRE IT TO BE.

    We can never get that god's eye view anyway.  

Right.  That's why the correspondence theory of truth (and dogmatic
realism as a metaphysical position) is incoherent.

    Of course theories can hypothesize
    that the world is a certain way (points moving under roofs) and there
    may be some such theory which can never be improved upon (the ultimate
    unified field theory).

Of course theories and hypotheses are dependent at least in part upon what
the theoretician's or scientist's actual real-world interests are.  This
would include all sorts of emotional and socially-determined factors.

    	Consider the following question concerning the correspondence
    theory.  Consider a world which is the spatial Fourier transform of ours.
    In this world are beings which are Fourier transforms of us.  Furthermore
    this transform correspondence holds for all time.  Is there any sense in
    which "our" world is different from "that" world?  Are worlds which in
    principle produce the same sense data really different worlds?

What does this question have to do with anything at all?  

∂17-Jan-83  0216	GAVAN @ MIT-MC 	Occam's Razor 
Date: Monday, 17 January 1983  05:15-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Batali @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Occam's Razor
In-reply-to: The message of 16 Jan 1983  20:08-EST from DAM

    Date: Sunday, 16 January 1983  20:08-EST
    From: DAM

    	Date: Sunday, 16 January 1983  17:47-EST
    	From: BATALI

    	... Despite the success of the Solomonoff approach, it
    	might be worthwhile to treat Occam's razor not as an objectively
    	defined notion, but rather as a roughly defined high-level goal. ...

    	It seems to me that we should treat Occam's razor as a natural
    kind (i.e. assume there is a notion of "simpler" used by real-world
    people).  This natural kind may turn out to be a precisely definable
    rough high level goal.

The problem here is that no natural kind term may be precisely defined
without some sort of universal consensus.  Different people have
different meanings for many natural kind terms (like "simple"), which
is to say that their extensions for such terms are not equivalent.
You can define "simple" in some precise way and make the assumption
that your precise definition is what "simple" means, but people who
dislike your assumption (or perhaps even your theory) can "simply"
dispute your definition.

∂17-Jan-83  0234	GAVAN @ MIT-MC 	A note on coherence
Date: Monday, 17 January 1983  05:31-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   phil-sci @ MIT-MC
Subject: A note on coherence
In-reply-to: The message of 16 Jan 1983  11:12-EST from ISAACSON at USC-ISI

    Date: Sunday, 16 January 1983  11:12-EST
    From: ISAACSON at USC-ISI

    Without pretending to know either Marvinese or Gavanese [for I'm
    lucky to know some English on top of my native Hebrew], I wish to
    throw in this comment.

    To my mind's eye, coherent ideas are both simple AND beautiful [I
    think this is also Peirce's position] and tend to promote
    coherence of beliefs within a given society.

Well, beauty is relative across cultures.  Is truth also relative
across cultures?

    Perhaps it is possible to associate some qualified notions of
    "simplicity" and "beauty" [from the point of view of a given
    society-of-minds subculture] with a notion of "coherence" which
    is DUAL in a certain sense.

    That is, the coherence of ideas generated by a given society is a
    reflection of and reflected in the societal coherence (and its
    belief system), and vice versa.  [the last phrase may be redundant
    ...]

Wouldn't this bring us back to a consensus theory of truth?  JMC
thinks this is "muddled" and "scientifically unpromising."

    As an afterthought, that is, perhaps, why people talk
    figuratively about "adherents" of theories; "adherence" being a
    rough synonym of "coherence".

No. An adherent of a theory is someone who believes in it.  The
coherence theory is posited at the level of the individual knower.
Saying that a belief is coherent is equivalent to saying that the
belief makes the network of the knower's knowledge more coherent.  The
problem with "adherence" is that even a "kludge" or a
"special-case-hack" could adhere to such a network, but it wouldn't
necessarily make the network itself be more coherent.

∂17-Jan-83  0250	philosophy-of-science-request@MIT-MC 	List Info   
Date: Monday, 17 January 1983, 05:46-EST
From: philosophy-of-science-request@MIT-MC
Sender: JCMa@MIT-OZ at MIT-MC
Subject: List Info
To: phil-sci@MIT-OZ at MIT-MC

For those who don't know, the archive for the discussion is in:

	   OZ:SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVES.TXT

Requests for additions to or deletions from the list should be
sent to:
		    PHILOSOPHY-OF-SCIENCE-REQUEST@MC

Oz users can send to the same @OZ.  Meta-questions about the list should
also be directed to the -request address.

∂17-Jan-83  0251	GAVAN @ MIT-MC 	Truth-Theoretic Semantics different from Message Passing Semantics   
Date: Monday, 17 January 1983  05:47-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-OZ, HEWITT @ MIT-OZ <Hewitt @ MIT-XX>,
      phil-sci @ MIT-OZ
Subject: Truth-Theoretic Semantics different from Message Passing Semantics
In-reply-to: The message of 17 Jan 1983  04:08-EST from KDF

    Date: Monday, 17 January 1983  04:08-EST
    From: KDF
    Sender: KDF
     
        How can you explain how they come to their conclusions without
        explaining how they came to their premises?  

    You can't, and that is the point.  The "community" metaphor doesn't say
    ANYTHING about the structure of the individual minds in it - how they
    get their premises, their conclusions, etc.  

Oh, I think you're wrong.  Our ability to engage in communication
within a community of language users (and even specialized-language
users) is a very important factor in determining both our premises and
our conclusions.  Try being a hermit for 20 years and see what you
think then.  Also, see the literature on the Whorfian hypothesis.

I think you're at least partially objecting to the utility of
metaphors in theorizing.  If so, see Anatol Rapoport on models and
metaphors in *Operational Philosophy* (this is not an endorsement of
everything in that book), and Charles Sanders Peirce on abductive
inference.  This is what I was referring to (implicitly) in my earlier
message.

    At best it is a guide to
    organizing communication between modules, but that isn't too interesting
    until you know what the modules can do.  

But how can you know what the modules can do before knowing how they
communicate?

    Early vision was a great example
    of this - evidence of module boundaries was ignored, heterarchical theories
    were generated, and the approach sank into a quagmire.  It is important to
    understand not just how to compute something, but what to compute and why
    (Marr party line..).  This is NOT to say that "community" metaphors won't
    tell you anything - Marvin is surely right in claiming that a large part
    of a theory of intelligence will be architectural.  But unless we understand
    what "modules" might exist, then our theorizing about communication will
    probably be underconstrained.  

I agree.  But our theorization about what modules might exist will
probably be underconstrained if we don't understand how they
communicate.  Why not do both?

∂17-Jan-83  0715	BATALI @ MIT-MC 	verificationism        
Date: Monday, 17 January 1983  10:12-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: verificationism    
In-reply-to: The message of 17 Jan 1983  00:51-EST from John McCarthy <JMC at SU-AI>

I don't believe verificationism either and I certainly don't think
that unverifiable propositions are meaningless.  I suggested that
verificationism might be useful in AI for its idea that the meanings
of things are related to the procedures for verifying them.  One could
take this further (and beyond verificationism) by arguing that the
meaning of something depends on its (functional) relations with
everything else.  And verification is one kind of relation.  The value
of the verificationist approach is certainly not its top-level
positions -- instead perhaps we could find useful some of the insights
picked up as theory developed and ultimately crapped out.

As I said before, the reason why this might be true is the very fact
that the verificationists placed such importance on PROCEDURES.

∂17-Jan-83  0746	BATALI @ MIT-MC 	Consensus Theory of Truth   
Date: Monday, 17 January 1983  10:38-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth
In-reply-to: The message of 16 Jan 1983  19:59-EST from DAM

    Date: Sunday, 16 January 1983  19:59-EST
    From: DAM

    	In defining the world to be behaviour and sense data I have
    not ruled out our UNDERSTANDING it in terms of clouds and cows.  I
    agree completely that this is how we do indeed understand it and we as
    AI'ers must account for "ontological perception" or perception in
    terms of existent objects of certain ontological types.  I can parse a
    bit string as characters, then as words, then as sentences.  Or
    a bit string can be interpreted as pointers, then as a graph.  This is
    consistent with taking the world ultimately to be just a bit string
    which is best understood in terms of clouds and cows.  One can assume
    that "the world" is a perceptual bit string without reducing all concepts
    to bit strings, just as one can assume that the world is actually a wave
    function without thinking of one's children that way.

I've gotta feeling that when you say "the world" you mean something
like "the environment" in thermodynamics, that is: everything outside
the system of interest. Yes indeed: we could say that all a mind has
access to is sense data, and that can be represented as a bit-string.
But why can't we say that a mind has access to real objects, which are
represented by sense data?

The difference in approach, I think, is the difference between a
research program that tries to come up with algorithms for parsing bit
strings versus one that tries to use bit strings to find out about
cows.  The second approach would use results from the first but must
go beyond and use knowledge about the real world.

    	If one ASSUMES that the world (including people and cows and all)
    is actually a wave function then one can take the Fourier transform of
    that function.  But forgetting about Fourier transforms consider any
    hypothesis that the world (including us) is actually a mathematical
    structure of a certain type and consider any one-to-one transformation
    from structures of that type to structures of some other type.  This
    one-to-one transformation provides a kind of isomorphism between
    the two types of worlds and the predictions about perceptions are the
    same under either view.  Thus I do not think it makes sense to say that
    there are "actually" cows.  Rather the notion of cow is useful in
    "understanding" our sense data.

What has one gained by assuming that the world is "actually" a
particular mathematical formulation when there are such
transformations to other formulations?  How, for example, can you know
that the transformation preserves everything important?  That is to
say: what has been preserved when the transformation has been done?
Is it the sense-data bit stream?  But wouldn't that be transformed
also?  Where do we choose to separate the "world" in your view from
the mind?  It would have to be separated at the point where things
don't change when the mathematical transform occurs.  My claim is that
the only way to tell if some transform hasn't changed anything
important is if the REAL PHYSICAL OBJECTS in the world are unchanged.

∂17-Jan-83  0805	GAVAN @ MIT-MC 	verificationism and correspondence
Date: Monday, 17 January 1983  11:03-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: verificationism and correspondence
In-reply-to: The message of 17 Jan 1983  00:51-EST from John McCarthy <JMC at SU-AI>

    Date: Monday, 17 January 1983  00:51-EST
    From: John McCarthy <JMC at SU-AI>

    . . .

    If a universe exists and evolves intelligence, why should it only
    evolve intelligences that can observe every feature of the universe?
    I can't imagine a mechanism of evolution that would guarantee this.

Isn't this also an argument against the correspondence theory?

∂17-Jan-83  1240	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Monday, 17 January 1983  15:33-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth


	Date: Monday, 17 January 1983  10:38-EST
	From: BATALI

	....
	But why can't we say that a mind has access to real objects, which are
	represented by sense data?  ...

What do you mean by "access to"?

	...
	What has one gained by assuming that the world is "actually" a
	particular mathematical formulation when there are such
	transformations to other formulations? ...

	You are the one who insists on thinking that there is some
"actual" world behind the sense data.  Unfortunately for ANY class of
mathematical objects one can define a totally information preserving
transformation from objects in that class to objects in a different
class.  The Fourier transform is one example.  Suppose the real world
was a collection of finite sets and we could sense set inclusion
relations.  There are lots of possible universes (such as finite
LISTS) which would be indistinguishable from the universe of finite
sets (an appropriate definition of inclusion would have to be given
for lists).  If all one can sense is the inclusion relation, what sense
does it make to say that the world is sets and not lists?
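A toy illustration of that indistinguishability (my own sketch, with a
hypothetical "inclusion" observable): model the world once as finite sets
and once as sorted tuples, related one-to-one; every observation comes
out the same either way.

# Illustrative sketch: two candidate "worlds" -- finite sets and sorted
# tuples -- related by a one-to-one translation, with inclusion as the
# only thing that can be sensed.  All observations agree, so nothing
# distinguishes the set-world from the list-world.

def as_list(s):
    return tuple(sorted(s))            # one-to-one translation: set -> "list"

def includes_sets(a, b):
    return a <= b                      # the observable in the set universe

def includes_lists(a, b):
    return all(x in b for x in a)      # the corresponding observable for lists

things = [frozenset(), frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({4})]
for a in things:
    for b in things:
        assert includes_sets(a, b) == includes_lists(as_list(a), as_list(b))

Nothing observable separates the two, which is the sense in which the
question "sets or lists?" has no empirical content.
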
	This phenomenon is not just a pathology of some theories about
what the world is.  It is a necessary state of affairs for any such
theory, at least if the theory can be discussed in a precise way, i.e.
is a precise theory.  Not all theories are equivalent however.  The
theory of sets is different from a theory which allows the inclusion
relation to be non-transitive.

	David Mc

∂17-Jan-83  1322	John McCarthy <JMC@SU-AI> 	verificationism and correspondence    
Date: 17 Jan 83 1144
From: John McCarthy <JMC@SU-AI>
Subject: verificationism and correspondence    
To:   gavan at MIT-MC, phil-sci at MIT-OZ  

Subject: verificationism and correspondence
In-reply-to: GAVAN of 1983 jan 17 0805EST
"Isn't this also an argument against the correspondence theory?"
You'll have to elaborate this a bit before I can respond.


∂17-Jan-83  1322	DAM @ MIT-MC 	Solomonoff et. al.   
Date: Monday, 17 January 1983  14:49-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Gavan @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Solomonoff et. al.


	Date: Monday, 17 January 1983  04:59-EST
	From: GAVAN

	...
	It seems to me that there are two definitions of "simpler" in
	Solomonoff.  One has to do with the probability of a given string
	given the experience of other strings, and the other has to do with
	the probability of strings for which there is neither experience nor a
	theory.
	...
	Now if this is approximately (imprecisely) what is meant
	by Solomonoff's theory, then I don't see what it adds to Lakatos other
	than mathematical terminology.  This mathematical terminology may make
	the theory more precise and comprehensible for you, but it obfuscates
	it for everyone else.

	It appears that I have indeed failed to communicate the
nature of the Solomonoff et. al. theory.  There is a
difference between "popularizing" a theory such that readers with no technical
background think they understand it, and writing with the intention
of communicating technical details.  McCarthy has attempted to communicate
some of the technical details but these details are necessarily unintelligible
to people with impoverished technical backgrounds.
	What the Solomonoff et. al. theory says is not open to debate;
this is the character of precise mathematical theories.  The formation
of a precise mathematical theory is not simply the addition of
obfuscating terminology, as many people who can't understand detail would
like to think.  There is only one notion of "simplicity" in the Solomonoff
et. al. theory, and there is no notion of probability.

	David Mc

∂17-Jan-83  1408	GAVAN @ MIT-MC 	Solomonoff et. al. 
Date: Monday, 17 January 1983  16:19-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Solomonoff et. al.
In-reply-to: The message of 17 Jan 1983  14:49-EST from DAM

    Date: Monday, 17 January 1983  14:49-EST
    From: DAM

    	It appears that I have indeed failed to communicate the
    nature of the Solomonoff et. al. theory.  There is a difference
    between "popularizing" a theory such that readers with no technical
    background think they understand it, and writing with the intention
    of communicating technical details.  McCarthy has attempted to communicate
    some of the technical details but these details are necessarily 
    unintelligible to people with impoverished technical backgrounds.

    	What the Solomonoff et. al. theory says is not open to debate,
    this is the character of precise mathematical theories.  

Well, I'm certainly not debating Solomonoff's theory, but this
statement certainly sounds wrong to me.  Do you mean to say that no
precise mathematical theory has ever been the subject of debate?
Haven't any mathematical theories ever been subsequently proved wrong?

    The formation of a precise mathematical theory is not simply the
    addition of obfuscating terminology, as many people who can't
    understand detail would like to think.

I didn't say that the theory was simply the addition of obfuscating
terminology.  I just wondered how it differs SUBSTANTIVELY from
Lakatos' theory.  Sure, I was baiting you when I called it
obfuscating.  I had hoped to get a response which stated explicitly
(even if imprecisely) what the theory holds.  If any theory can't be
explained to scientists in general, it can't be expected to be
accepted generally as "truth."  That's the nature of the consensus
theory.  Marvin tries to do this, but you and JMC don't.  From
Marvin's explanation and from what I've been able to extract from
Solomonoff, there doesn't seem to be much difference between
Solomonoff and Lakatos.  Maybe there is.  I'd like to know.

    There is only one notion of "simplicity" in the Solomonoff
    et. al. theory, and there is no notion of probability.

Can you explain that notion of simplicity in a way that is accessible
to, say, the random doctoral student at MIT whose area of inquiry is
not mathematics, but who is at the same time not completely
mathematically illiterate?  Can you explain in the English language
how it differs from Lakatos' theory, which I've already explained on
this list?

∂17-Jan-83  1440	John McCarthy <JMC@SU-AI> 	Lakatos and Solomonoff      
Date: 17 Jan 83 1423
From: John McCarthy <JMC@SU-AI>
Subject: Lakatos and Solomonoff  
To:   gavan@MIT-OZ
CC:   dam@MIT-OZ, phil-sci@MIT-OZ

Lakatos, if Proofs and Refutations is the book in question, is concerned
with the social process whereby the mathematical community comes to
accept a theory.  Perhaps it also supposes that the meaning of a theorem
and its truth are socially determined.

What the Chaitin version (the one I have read about in Scientific
American) concerns is a notion of simplicity of sequences and functions
using the shortest program that generates the sequence.  As far as I
can see neither theory is best explained as a variant of the other
or in contrast with the other.  Each is best explained starting from
scratch.  The surprising and impressive thing about the Solomonoff,
et al theory is that so many of the results are independent of the
programming language chosen, and, in fact, Chaitin doesn't bother
to use a specific language.  The theory isn't very difficult, and
the Scientific American article, a few years back, and the references
it gives are an excellent source.
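For readers who want the flavor without the paper, a toy sketch of my own
(deliberately crude; the real Kolmogorov/Chaitin measure ranges over
programs for a universal machine and is not computable): score a string by
its shortest description in a tiny pattern-repetition language.

# Toy sketch of "simplicity = length of the shortest generating description".
# The description language here is just (pattern, repeat count); this is an
# illustration of the idea, not the actual construction.

def description_length(s):
    best = len(s) + 1                           # worst case: quote the string itself
    for k in range(1, len(s) + 1):
        if len(s) % k == 0 and s[:k] * (len(s) // k) == s:
            best = min(best, k + len(str(len(s) // k)))   # pattern plus repeat count
    return best

regular = "01" * 16                             # 32 symbols, but an obvious rule
irregular = "01101110010111011110001001101"    # no short repeating pattern
print(description_length(regular), "vs literal length", len(regular))
print(description_length(irregular), "vs literal length", len(irregular))

The point survives even in the toy: the regular string has a description
much shorter than itself, while the irregular one does not.
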


∂17-Jan-83  1447	GAVAN @ MIT-MC 	correspondence theory of truth    
Date: Monday, 17 January 1983  17:36-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci%mit-oz @ MIT-MC
Subject: correspondence theory of truth   
In-reply-to: The message of 16 Jan 1983  11:54-EST from John McCarthy <JMC at SU-AI>

    Date: Sunday, 16 January 1983  11:54-EST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan%mit-oz at MIT-MC
    cc:   phil-sci%mit-oz at MIT-MC

    	Before Godel, it was possible to believe in a correspondence
    theory and to hope that the truth about every question would
    eventually be determined.  If you don't separate the notion of
    truth from the procedures for determining it, then you find yourself
    unable to accept a proposition as meaningful unless you have
    some advance assurance that it is decidable.  This is contrary to
    common sense practice, which makes conjectures without prejudice
    to being able to settle them.  

I can agree with this.  Statements about the existence of God,
although unprovable, are by no means meaningless.  I didn't mean to
give you the impression that your meta-epistemology idea was
meaningless.  I just wanted to point out that it was an article of
faith, that's all.

    . . .

    The consequence of all this, I conjecture, will be that the set
    of meaningful propositions of set theory will not be r.e., i.e.
    cannot be specified in a single theory.  This is not what one
    would like to be true, but when one accepts a correspondence
    theory of truth, one cannot expect that everything will turn
    out in accordance with one's preferences.

If the consensus theory were "true" at the individual level of analysis
and if you were powerful enough to force a consensus in accord with your
preferences, then everything would turn out the way you like.  That's
irrationalism and I oppose it too.  But you haven't yet argued in favor
of the correspondence theory against the coherence theory.  

    As some famous scientist
    put it: "The universe is not only queerer than we know; it is
    queerer than we can know".

Assuming of course that the universe "out there" is anything more than
what we know about it.  After all, our only access to it is the
images we have in our heads.  Of course, we wouldn't have these images
unless we existed in the universe "out there."  It seems to me that
ontology is subsumed by epistemology and that epistemology is subsumed
by ontology.  Yet it also seems apparent that the correspondence
theory requires a dualist metaphysics.  That's my problem with it.
I'm an internalist.


    For example,
    it has been shown that Conway's life universe admits universal
    computers that can reproduce: (all M.I.T. rumor; I don't have a
    reference).  We can imagine a physicist program in the life world,
    and can ask what epistemological mechanisms (i.e. research strategies)
    if any would permit a life world physicist to determine that the
    fundamental physics of his world was that of a cellular automaton and
    Conway's automaton in particular.  Besides doing examples of varying
    complexity, we can develop mathematical theories relating properties
    of the world and the subsystems regarded as scientists to what facts
    about the world these scientists can determine.  

I am friendly to this idea, but I'm sceptical of its "precise"
formalizability (perhaps you don't intend to be precise).  Properties
of the world can only be known by some knower.  Now I assume that some
scientist will come up with the theories relating properties of the
world to other scientists' theories.  So aren't you just comparing
theories?  What's your base-line?

It's unclear whether "properties of the world" can be objectively
modeled using extensional techniques.  

    After developing
    such theories, we can try to apply them to our own world.  My
    expectation would be that it would be discovered that consensus based
    strategies would either be impossible to define rigorously or would
    rarely work or would be unnecessarily complicated and simplify down
    to correspondence based strategies.

You can't say that a theory is wrong simply because it can't be
defined rigorously.  Extensional techniques often fail at the social
level of analysis not only because of the complexity of social
phenomena, but also because two people can have (a) the same
intensions with differing extensions, and (b) differing intensions for
the same extension.  In the final analysis, meaning and truth are
determined socially, in discourse, which is often imprecise.  Also,
the consensus theory and the correspondence theory are by no means
mutually exclusive.  They are theories of truth at different levels of
analysis.  Truth could be a consensus about a correspondence.  The
correspondence theory and the coherence theory are, however, mutually
exclusive.  Both are posited at the individual level of analysis.  I'm
arguing that ascriptions of truth to a theory are statements about the
level of consensus about its coherence in the existing body of
knowledge within some linguistic community.  You seem to be arguing that
ascriptions of truth to a theory are statements about the degree of
correspondence between the theory and something "out there" in the
universe.  Is that right?  If so, could you state why you believe the
latter as opposed to the former?

    Finally, my remarks about "muddled" and "scientifically unpromising"
    were intended to suggest that it is possible and necessary to do
    a lot better than this discussion has been doing.  Perhaps I'm
    mistaken in this opinion of much of the discussion.

You made those remarks only a day after being added to the list.  What
you read you took out of context.  We were discussing science, and
theories of truth, at a social level of analysis for pragmatic reasons
relating to Carl Hewitt's research interests.  

What do you mean by "a lot better"?  What's your theory of the good?

    	There is an eight page article entitled "The correspondence theory
    of truth" in Edwards's "Encyclopedia of Philosophy".  It begins with
    Plato, includes the Stoics, and in modern times includes G. E. Moore,
    Russell (who coined the term) and Wittgenstein.  It ends with Tarski.
    I have to confess I didn't get much out of the article, which is by
    A. N. Prior, the developer of tense logic.  To put it in hacker terms:
    "The program has so many bugs and such bad comments, that it
    seems better to chuck it and start over"  - except of course for Tarski's
    technical results.  Others may have better luck with it, and there is
    a bibliography.  The Encyclopedia is generally excellent and is sometimes
    available as Book-of-the-Month premium for joining - worth it if you
    are strong minded enough to subsequently buy the bare minimum of four
    books.

I know about the Encyclopedia.  It's helpful, but of course it's no
substitute for the primary source material.  An interesting critique
of the correspondence theory and explication of internalism may be
found in Hilary Putnam's recent *Reason, Truth, and History.*

∂17-Jan-83  1512	BATALI @ MIT-MC 	correspondence theory of truth   
Date: Monday, 17 January 1983  18:04-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci%mit-oz @ MIT-MC
Subject: correspondence theory of truth   
In-reply-to: The message of 17 Jan 1983  17:36-EST from GAVAN

    Date: Monday, 17 January 1983  17:36-EST
    From: GAVAN
  
    Yet it also seems apparent that the correspondence
    theory requires a dualist metaphysics.  That's my problem with it.
    I'm an internalist.

If this is the main problem with the correspondence theory, it would
be nice to see an explanation of how a dualistic metaphysics is
required.

∂17-Jan-83  1518	BATALI @ MIT-MC 	Consensus Theory of Truth   
Date: Monday, 17 January 1983  18:12-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth
In-reply-to: The message of 17 Jan 1983  15:33-EST from DAM

    From: DAM

        From: BATALI
    	....
    	But why can't we say that a mind has access to real objects, which are
    	represented by sense data?  ...

    What do you mean by "access to"?

The ability to be causally affected by; the ability to causally affect.

    	You are the one who insists on thinking that there is some
    "actual" world behind the sense data.

Sorry. Can't help it.

    Unfortunately for ANY class of
    mathematical objects one can define a totally information preserving
    transformation from objects in that class to objects in a different
    class.  The Fourier transform is one example.  Suppose the real world
    was a collection of finite sets and we could sense set inclusion
    relations.  There are lots of possible universes (such as finite
    LISTS) which would be indistinguishable from the universe of finite
    sets (an appropriate definition of inclusion would have to be given
    for lists).  If all one can sense is the inclusion relation, what sense
    does it make to say that the world is sets and not lists?
    	This phenomenon is not just a pathology of some theories about
    what the world is.  It is a necessary state of affairs for any such
    theory, at least if the theory can be discussed in a precise way, i.e.
    is a precise theory.  Not all theories are equivalent however.  The
    theory of sets is different from a theory which allows the inclusion
    relation to be non-transitive.

I'm confused.  Are you claiming that the world is necessarily some
mathematical object?  Or that the world can be described using
mathematics?  Do you think that there is an important difference?

∂17-Jan-83  1821	ISAACSON at USC-ISI 	Non-technical Chaitin's papers    
Date: 17 Jan 1983 1717-PST
Sender: ISAACSON at USC-ISI
Subject: Non-technical Chaitin's papers
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]17-Jan-83 17:17:48.ISAACSON>

The Scientific American paper JMC cited is:

Randomness and Mathematical Proof by Gregory J. Chaitin, Sci.
Am., May 1975, pp.  47-52.

He published another nontechnical paper in 1974 -

Information-Theoretic Computational Complexity (Invited Paper),
IEEE Trans.  InfoTheory, Vol.  IT-20, No.  1, Jan 1974, pp.  10 -
15.


∂17-Jan-83  2149	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	verificationism        
Date: Tuesday, 18 January 1983, 00:43-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: verificationism    
To: John McCarthy <JMC at SU-AI>
Cc: phil-sci at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 17 Jan 83 00:51-EST from John McCarthy <JMC at SU-AI>

    Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Mon 17 Jan 83 04:07:26-EST
    Date: 17 Jan 83 0051
    From: John McCarthy <JMC@SU-AI>
    Subject: verificationism    
    To:   phil-sci at MIT-OZ    

    I think verificationism is no good for AI for the same reason as it
    is no good for science.  Taken seriously, it limits thought.  Consider
    the hypothesis that matter is composed of atoms.  During most of the
    nineteenth century, no-one could think of any way of verifying it,
    and towards the end of the century, some famous chemist of a positivist
    frame of mind, Ostwald perhaps, emphasized that it was just a means
    of keeping track of some phenomena, e.g. the law of combining proportions.

One interpretation of this is that he was complaining that there
were very few known procedures for helping to establish or refute
the hypothesis.
 
    Shortly thereafter, it was verified and Avogadro's number was computed
    in a variety of ways.

New procedures were discovered.

    The most spectacular way was that individual scintillations from
    radioactive decay were observed, a rate of decay
    in atoms per second computed and compared with a rate of decay in
    (say) micrograms per year of a large sample.

They provide positive evidence.

            If AI systems were programmed to consider only propositions
    when they could generate a means of verification, they would be
    extremely unimaginative.

I agree: using the criterion that there are only a few known procedures
for establishing or refuting a hypothesis is a poor filter.  However,
adding significant new procedures to the set is often quite valuable. 

∂17-Jan-83  2216	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	Objectivity of Mathematics  
Date: Tuesday, 18 January 1983, 01:02-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: Objectivity of Mathematics
To: DAM at MIT-MC
Cc: Hewitt at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 16 Jan 83 16:10-EST from DAM at MIT-MC

    Date: Sunday, 16 January 1983  16:10-EST
    From: DAM at MIT-MC
    Sender: DAM at MIT-OZ
    To:   Hewitt at MIT-OZ
    cc:   phil-sci at MIT-OZ
    Re:   Objectivity of Mathematics
    Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Sun 16 Jan 83 16:17:25-EST

            Date: Sunday, 16 January 1983  01:43-EST
            From: HEWITT at MIT-OZ <Hewitt at MIT-XX>

            ...
            For a long time I had an intuition based on programming in LISP that
            "recursion is more powerful than iteration".  This intuition flew in
            the face of Minsky's well known theorem that a simple iterative two
            counter program is universal.  After reading a paper by Luckham and
            Paterson, I was able to distill a notion of recursion and then find an
            example of a recursive program that could not be programmed
            iteratively.  Mike then conceived of a beautiful proof technique to
            establish the result. ...

            I do not want anyone to get the idea that you refuted Minsky's
    established theorem, you and Mike simply found an alternative definition
    for what it means for two computational systems to be "equivalent".  Of
    course I agree that finding useful definitions is very important but
    this is not an example supporting Lakatos's claims.

            David Mc

Actually from the point of view of the mathematical community at the
time, we did succeed in refuting an important aspect of Minsky's
established theorem.  The result we established went against the
conventional wisdom of the time.  I got into a lot of arguments with
colleagues at the time.  Gradually our result came to be accepted by the
community. 
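For anyone who has not seen the result being referred to, the intuitive
flavor (my own illustration, not the Luckham/Paterson-style schema
argument itself) is that simulating recursion iteratively requires
carrying an explicit, unbounded stack, memory that a flowchart with a
fixed set of variables does not have.

# Illustrative sketch only: a recursive walk over an uninterpreted binary
# tree, and the iterative simulation, which works only because it keeps an
# explicit, unbounded stack.

def walk_recursive(node, visit):
    if node is None:
        return
    visit(node["value"])
    walk_recursive(node["left"], visit)
    walk_recursive(node["right"], visit)

def walk_iterative(node, visit):
    stack = [node]                     # unbounded auxiliary memory
    while stack:
        n = stack.pop()
        if n is not None:
            visit(n["value"])
            stack.append(n["right"])   # push right first so the left subtree is visited first
            stack.append(n["left"])

tree = {"value": 1,
        "left":  {"value": 2, "left": None, "right": None},
        "right": {"value": 3, "left": None, "right": None}}
seen_r, seen_i = [], []
walk_recursive(tree, seen_r.append)
walk_iterative(tree, seen_i.append)
assert seen_r == seen_i == [1, 2, 3]
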

I find the phenomena by which consensus is arrived at and the earlier
controversy "papered over" to be extremely interesting.  Marvin
long ago observed a similar phenomenon with respect to machines
which do tasks that appear "intelligent".  When the mechanisms
by which the program works are explained, the feeling of intelligence
often goes away.

A more contemporaneous case has arisen from the claim that actor systems
can perform computations which cannot be performed by a nondeterministic
Turing Machine.  Currently the claim is extremely controversial and I
can get the rebuttal argument that the claim is false since it violates
Church's Thesis by simply walking down the hall to where our
"theorists" hang out.  The future will tell how the community decides
to deal with this one.

∂17-Jan-83  2239	John McCarthy <JMC@SU-AI> 	Correspondence theory of truth and meta-epistemology 
Date: 17 Jan 83  2234 PST
From: John McCarthy <JMC@SU-AI>
Subject: Correspondence theory of truth and meta-epistemology 
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: Correspondence theory of truth and meta-epistemology
In reply to: GAVAN of 1983 jan 17 1736
jmc-	The meta-epistemology I propose is indeed a "God's eye view",
    but it doesn't presuppose or conclude that I have a God's eye view
    of this world.  It proposes that we begin by studying the problem
    of developing science within worlds of known structure.

GAVAN - Whose known structure?  This, by the way, is part of what Lakatos is
doing.  See his comments on generating and degenerating research
programs.

I evidently have not made one thing perfectly clear.  "Whose known structure"
refers to mathematical structures concocted for studying meta-epistemology,
which will not be anyone's conjecture of what the real world is like.
For example, the Conway life universe with its life physicists is such
a structure.  The question for mathematical study is what strategies, if any,
will let such a physicist program discover that it "lives" in the Conway universe.
The advantage is precisely that we can study what strategies are
effective in what kinds of universe without presumptions about our
own universe.  Besides studying particular strategies in particular
hypothetical universes, we can try to prove general theorems about
what kinds of strategies will work in what kinds of universes.  My
conjecture is that various rigidly defined operationalist strategies
can be proved not to work.
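For concreteness, the entire "physics" of such a world is one update rule.
A minimal sketch (the standard birth-on-3, survive-on-2-or-3 Life rule on
a small wrapped grid), purely to show what a "world of known structure"
means here:

# Minimal sketch of the Conway Life "physics" on a small toroidal grid.
# Purely illustrative.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (neighbors(r, c) == 3 or (grid[r][c] and neighbors(r, c) == 2)) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": a simple periodic object in this universe.
world = [[0, 0, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
assert life_step(life_step(world)) == world   # period-2 behavior

A "physicist" inside this world would itself be some very large
configuration of such cells.
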

	Remember also the point, admitted by almost everyone when
not arguing about scientific method, that we evolved very recently
in a world which shows no evidence of having been designed for our
convenience - intellectual convenience as well as physical
convenience.  There is no guarantee whatsoever that every aspect
of the world will ever be observable.  Presumably, meta-epistemology
will have theorems about when a world is completely knowable, and
I conjecture that the condition will be quite special and not very
plausible with regard to the real world.

	Mathematical platonists, of which I am one, hold that there
is objective mathematical truth which mathematicians try to discover.
This truth does not depend on the physical world, and this fact
gives rise to difficulties in figuring out just what kind of a beast it
is.  While many mathematicians formally hold other points of view
(formalist, logicist or constructivist of some flavor), lots will admit
that they informally act as Platonists.  Godel argued, and I tried
to paraphrase some of his arguments, that much of his success was
due to his Platonism.  Van Heijenoort, a historian of logic, doesn't
agree, so it's not unanimous whether Godel's philosophy helped
his mathematics.

∂17-Jan-83  2315	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
Date: Tuesday, 18 January 1983, 02:13-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: The smallest description of the past is the best theory for the future?
To: DAM at MIT-MC
Cc: MINSKY at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 15 Jan 83 19:46-EST from DAM at MIT-MC

    Mail-from: ARPANET site MIT-MC rcvd at 15-Jan-83 1949-EST
    Date: Saturday, 15 January 1983  19:46-EST
    Sender: DAM @ MIT-OZ
    From: DAM @ MIT-MC
    To:   MINSKY @ MIT-OZ
    Cc:   Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
    Subject: Solomonoff and RElativity, etc.
    In-reply-to: The message of 15 Jan 1983  18:09-EST from MINSKY


            Perhaps the right way to view Solomonoff is as a method
    for choosing between competing theories.  While this eliminates the
    issue of searching the space of theories, one is still left with
    the halting problem (does a theory in fact predict x?, if my computations
    would only terminate I would tell you).  However I agree with Marvin
    (at least in the case of choosing between theories) that Solomonoff's
    work could be used as a practical guide.

It occurs to me that in my own scientific work, being the smallest
theory in the sense of Solomonoff et. al. is of secondary importance. 
What I am most concerned about is that the structure of a theory be
smoothly extendible in the future.  I am willing to accept a much
larger theory in order to gain the ability to more smoothly evolve.
This is particularly true for software systems that we construct and
evolve.  Is there any reason to believe that the minimum size theories
selected by Solomonoff etc. have the property of smooth structural
evolution?

∂18-Jan-83  0113	MINSKY @ MIT-MC 	The smallest description of the past is the best theory for the future?  
Date: Tuesday, 18 January 1983  04:07-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   Carl Hewitt <Hewitt @ MIT-OZ>
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The smallest description of the past is the best theory for the future?
In-reply-to: The message of 18 Jan 1983 02:13-EST from Carl Hewitt <Hewitt>


HEWITT: I am willing to accept a much larger theory in order to
gain the ability to more smoothly evolve.  This is particularly true
for software systems that we construct and evolve.  Is there any
reason to believe that the minimum size theories selected by
Solomonoff etc. have the property of smooth structural evolution?

What could you mean by "extendible", except that the additions be of
minimum size.  Then you might interpret the Solomonoff formula as
insisting that one should always use the formula that has been most
smoothly extendible in the past - but only for the next few bits (with
exponential decay weighting).  Notice that it is subject to
reformulation whenever profitable.

Do you think your question is meaningful enough without a definition of
extendible?
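One way to see what that weighting does, as a toy sketch of my own (a
hypothetical four-rule hypothesis class with made-up description lengths,
not Solomonoff's construction over all programs): keep every rule
consistent with the bits seen so far, weight each by 2 to the minus its
description length, and let the weights vote on the next bit.

# Toy sketch of Solomonoff-style prediction over a tiny, made-up hypothesis
# class.  Each rule gets prior weight 2 ** -description_length; rules that
# contradict the observed bits are dropped; the survivors vote on the next bit.

hypotheses = [
    # (name, description length in bits, rule for the i-th bit)
    ("all zeros",          2, lambda i: 0),
    ("all ones",           2, lambda i: 1),
    ("alternating 0101",   3, lambda i: i % 2),
    ("alternate, then 1s", 8, lambda i: 1 if i >= 6 else i % 2),
]

def predict_next(observed):
    votes = {0: 0.0, 1: 0.0}
    for _name, length, rule in hypotheses:
        if all(rule(i) == bit for i, bit in enumerate(observed)):   # fits the past
            votes[rule(len(observed))] += 2.0 ** -length            # shorter rules weigh more
    return max(votes, key=votes.get)

print(predict_next([0, 1, 0, 1, 0, 1]))   # the short alternating rule wins: predicts 0

The longer rule (switching over to ones) also fits the data but carries
far less weight, which is one way to picture the brittleness question
raised later in the thread.
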

∂18-Jan-83  0637	GAVAN @ MIT-MC 	Non-technical Chaitin's papers    
Date: Tuesday, 18 January 1983  09:29-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   phil-sci @ MIT-MC
Subject: Non-technical Chaitin's papers
In-reply-to: The message of 17 Jan 1983  20:17-EST from ISAACSON at USC-ISI

    Date: Monday, 17 January 1983  20:17-EST
    From: ISAACSON at USC-ISI
    To:   GAVAN
    cc:   phil-sci at MIT-MC, isaacson at USC-ISI
    Re:   Non-technical Chaitin's papers

    The Scientific American paper JMC cited is:

    Randomness and Mathematical Proof by Gregory J. Chaitin, Sci.
    Am., May 1975, pp.  47-52.

Yes. I read this this morning.  I'm still at a loss trying to understand
how this improves upon Lakatos.

∂18-Jan-83  0637	GAVAN @ MIT-MC 	correspondence theory of truth    
Date: Tuesday, 18 January 1983  09:22-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci%mit-oz @ MIT-MC
Subject: correspondence theory of truth   
In-reply-to: The message of 17 Jan 1983  18:04-EST from BATALI

    Date: Monday, 17 January 1983  18:04-EST
    From: BATALI
    Sender: BATALI
    To:   GAVAN
    cc:   John McCarthy <JMC at SU-AI>, phil-sci%mit-oz at MIT-MC
    Re:   correspondence theory of truth   

        Date: Monday, 17 January 1983  17:36-EST
        From: GAVAN
      
        Yet it also seems apparent that the correspondence
        theory requires a dualist metaphysics.  That's my problem with it.
        I'm an internalist.

    If this is the main problem with the correspondence theory, it would
    be nice to see an explanation of how the a dualistic metaphysics is
    required.

Well, if there is to be a correspondence, there would have to be a
correspondence between something and something else.  Now, as I
understand the correspondence theory, there's supposed to be a
correspondence between something "in the mind" and something "out
there in the world."  I don't think the two can be bifurcated so
neatly.  I don't know if that's the "main" problem with the
correspondence theory, but that's my problem with it.  How do we know
we're not just brains in a vat and we just imagine the world?  For
a fuller explication see Putnam's *Reason, Truth and History*.

∂18-Jan-83  0749	Carl Hewitt <Hewitt at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?  
Date: Tuesday, 18 January 1983, 10:43-EST
From: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>
Subject: The smallest description of the past is the best theory for the future?
To: MINSKY at MIT-MC
Cc: Carl Hewitt <Hewitt at MIT-OZ at MIT-MC>, phil-sci at MIT-OZ at MIT-MC,
    Hewitt at MIT-OZ at MIT-MC
In-reply-to: The message of 18 Jan 83 04:07-EST from MINSKY at MIT-MC

    Received: from MIT-MC.ARPA by MIT-XX.ARPA with TCP; Tue 18 Jan 83 04:13:19-EST
    Date: Tuesday, 18 January 1983  04:07-EST
    Sender: MINSKY @ MIT-OZ
    From: MINSKY @ MIT-MC
    To:   Carl Hewitt <Hewitt @ MIT-OZ>
    Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
    Subject: The smallest description of the past is the best theory for the future?
    In-reply-to: The message of 18 Jan 1983 02:13-EST from Carl Hewitt <Hewitt>


    HEWITT: I am willing to accept a much larger theory in order to
    gain the ability to more smoothly evolve.  This is particularly true
    for software systems that we construct and evolve.  Is there any
    reason to believe that the minimum size theories selected by
    Solomonoff etc. have the property of smooth structural evolution?

    What could you mean by "extendible", except that the additions be of
    minimum size.  Then you might interpret the Solomonoff formula as
    insisting that one should always use the formula that has been most
    smoothly extendible in the past - but only for the next few bits (with
    exponential decay weighting).  Notice that it is subject to
    reformulation whenever profitable.

    Do you think your question is meaningful enough without a definition of
    extendible?

This is what I am worried about:

         At any given point the Solomonoff et. al. method will choose
      the smallest program that accounts for past usage.  Unfortunately
      the program chosen will always be over-optimized and very
      brittle.  It will have to be completely rewritten in order to
      be suitable for the next usage.

Is there any reason to believe that the above worry is groundless?

∂18-Jan-83  1214	John McCarthy <JMC@SU-AI> 	Correspondence theory of truth   
Date: 18 Jan 83  1204 PST
From: John McCarthy <JMC@SU-AI>
Subject: Correspondence theory of truth   
To:   gavan@MIT-OZ, batali@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: Correspondence theory of truth
In reply to: GAVAN of 1983 jan 18 0622EST
The Tarski theory ascribes truth to sentences, so the correspondence
is between these sentences and the world.  How these sentences are
represented, whether in a physical structure or (as might be preferred
by a dualist if dualists exist) in a mind is, strictly speaking,
not part of the theory.  In my proposed meta-epistemology, the
sentences (or more abstract objects called propositions) are represented
in the "memory" of the "scientist" part of the system.  Thus in the
Conway life world physicist, the question is whether the fact that
their fundamental physics is life, will become represented in the
language we are talking about in the memory of the "physicist" or
in their "Physical review".  Both the memory of the physicist and
the Physical review are encoded in complicated configurations of
life cells being on or off.  Thus we are talking about a correspondence
between two physical structures.


∂18-Jan-83  1316	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Correspondence theory of truth    
Date: Tuesday, 18 January 1983, 16:03-EST
From: Gavan Duffy <GAVAN at MIT-OZ at MIT-MC>
Subject: Correspondence theory of truth   
To: JMC at SU-AI, gavan at MIT-OZ at MIT-MC, batali at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

    Date: 18 Jan 83  1204 PST
    From: John McCarthy <JMC@SU-AI>
    Subject: Correspondence theory of truth   

    The Tarski theory ascribes truth to sentences, so the correspondence
    is between these sentences and the world.  How these sentences are
    represented, whether in a physical structure or (as might be preferred
    by a dualist if dualists exist) in a mind is, strictly speaking,
    not part of the theory.  In my proposed meta-epistemology, the
    sentences (or more abstract objects called propositions) are represented
    in the "memory" of the "scientist" part of the system.  Thus in the
    Conway life world physicist, the question is whether the fact that
    their fundamental physics is life, will become represented in the
    language we are talking about in the memory of the "physicist" or
    in their "Physical review".  Both the memory of the physicist and
    the Physical review are encoded in complicated configurations of
    life cells being on or off.  Thus we are talking about a correspondence
    between two physical structures.

In actuality you have three structures.  One is the structure of the
physicist's beliefs.  Another is the structure of the language he/she
speaks.  The third is the structure of the physical world.  Is there
necessarily a correspondence between any two of these?  Why should there
be?  In what sense does what's in the "Physical Review" correspond to
what's in the physicist's head?  Is there a one-to-one mapping between
the language of the physicist and the physical world?  Could there ever
be?  Or is there a many-to-one mapping, or a many-to-many mapping?
Whatever the nature of the mapping, how do you propose to define the
transformations necessary to go from one to another?

I would agree that it's important to assess whether a scientific
discipline or even many normal-science paradigms within one discipline
are engaging in a progressive or a degenerating problemshift, which is
what I think you have in mind.  Lakatos' method seems to me to be much
more do-able.  Just check the recent additions of ad hoc hypotheses
(ceteris paribus clauses) to the protective belt of a theory, assessing
how much additional corroborated empirical content they provide.  The
more they provide, the more progressive they are.

Anyway, I'm not totally opposed to your project, but I can't see how
it's practicable, not only because of the state of contemporary
technology, but also because I think you're positing a correspondence
that just isn't there.  I remain to be convinced.  Can you show that a
correspondence between (a) the mind and the world, or (b) the mind and
the system of reference, or (c) the world and the system of reference
necessarily exists?  

Maybe you don't mean it to be practicable.

∂18-Jan-83  1352	Jon Amsterdam <JBA at MIT-OZ> 	The smallest description of the past is the best theory for the future?   
Date: Tuesday, 18 January 1983, 16:22-EST
From: Jon Amsterdam <JBA at MIT-OZ>
Subject: The smallest description of the past is the best theory for the future?
To: MINSKY at MIT-MC, Hewitt at MIT-OZ
Cc: DAM at MIT-OZ, phil-sci at MIT-OZ
In-reply-to: The message of 18 Jan 83 04:07-EST from MINSKY at MIT-MC

    Mail-From: MINSKY created at 18-Jan-83 04:07:17
    Date: Tuesday, 18 January 1983  04:07-EST
    Sender: MINSKY @ MIT-OZ
    From: MINSKY @ MIT-MC
    To:   Carl Hewitt <Hewitt @ MIT-OZ>
    Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
    Subject: The smallest description of the past is the best theory for the future?
    In-reply-to: The message of 18 Jan 1983 02:13-EST from Carl Hewitt <Hewitt>


    HEWITT: I am willing to accept a much larger theory in order to
    gain the ability to more smoothly evolve.  This is particularly true
    for software systems that we construct and evolve.  Is there any
    reason to believe that the minimum size theories selected by
    Solomonoff etc. have the property of smooth structural evolution?

    What could you mean by "extendible", except that the additions be of
    minimum size. 

I don't care how much I have to add, as long as I can do it cleanly.  For my
software systems, I want to be able to add a lot of stuff, but with minimum
modification to what exists, i.e. I want highly modular systems.  It seems
that modular systems are bigger (at first at least) than non-modular (open-coded?)
ones.  It's fine to optimize division by 2 as a right shift until you want
to divide by 3.
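A tiny sketch of that last point (my illustration): the open-coded version
is smaller for the one case it handles, but the first request to divide by
3 forces a rewrite, while the modular version only needs a new argument.

# Illustrative sketch: open-coded optimization vs. a modular formulation.

def halve_fast(n):
    return n >> 1            # division by 2 hard-wired as a right shift

def divide(n, d):
    return n // d            # more general; extending to d == 3 needs no rewrite

assert halve_fast(10) == divide(10, 2) == 5
assert divide(9, 3) == 3     # halve_fast simply cannot express this
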

    Then you might interpret the Solomonoff formula as
    insisting that one should always use the formula that has been most
    smoothly extendible in the past - but only for the next few bits (with
    exponential decay weighting).  Notice that it is subject to
    reformulation whenever profitable.

This still would seem to require frequent rewrites.  I'm not sure I understand it.
In any case, didn't DAM say that Solomonoff provides a single, precise definition
of what he means by simplicity?  Didn't Gavan ask DAM to provide this? I don't
recall having seen it yet.

∂18-Jan-83  1448	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	The smallest description of the past is the best theory for the future?   
Date: Tuesday, 18 January 1983, 17:27-EST
From: Gavan Duffy <GAVAN at MIT-OZ at MIT-MC>
Subject: The smallest description of the past is the best theory for the future?
To: MINSKY at MIT-OZ at MIT-MC, Hewitt at MIT-OZ at MIT-MC
Cc: DAM at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC

    Date: Tuesday, 18 January 1983  04:07-EST
    From: MINSKY

    HEWITT: I am willing to accept quite a much larger theory in order to
    gain the ability to more smoothly evolve.  This is particularly true
    for software systems that we construct and evolve.  Is there any
    reason to believe that the minimum size theories selected by
    Solomonoff etc. have the property of smooth structural evolution?

    What could you mean by "extendible", except that the additions be of
    minimum size.  Then you might interpret the Solomonoff formula as
    insisting that one should always use the formula that has been most
    smoothly extendible in the past - but only for the next few bits (with
    exponential decay weighting).  Notice that it is subject to
    reformulation whenever profitable.

In the "softer" sciences, such as the social sciences, a criterion that
the additions be of minimal size immediately cuts off any possibility
of radical theoretical reformulation.  It's a very conservative
criterion.  There are many examples of theoretical progress made in the
social sciences by theoreticians who purposefully declined to make
incremental changes to bogus theories.  They instead developed better
and far more complex theories.  

∂18-Jan-83  1448	Gavan Duffy <GAVAN at MIT-OZ> 	The smallest description of the past is the best theory for the future?   
Date: Tuesday, 18 January 1983, 16:33-EST
From: Gavan Duffy <GAVAN at MIT-OZ>
Subject: The smallest description of the past is the best theory for the future?
To: Hewitt at MIT-OZ, DAM at MIT-OZ
Cc: MINSKY at MIT-OZ, phil-sci at MIT-OZ

    Date: Tuesday, 18 January 1983, 02:13-EST
    From: Carl Hewitt <Hewitt>
    To:   DAM

	Mail-from: ARPANET site MIT-MC rcvd at 15-Jan-83 1949-EST
	Date: Saturday, 15 January 1983  19:46-EST
	Sender: DAM @ MIT-OZ

		Perhaps the right way to view Solomonoff is as a method
	for choosing between competing theories.  While this eliminates the
	issue of searching the space of theories, one is still left with
	the halting problem (does a theory in fact predict x?, if my computations
	would only terminate I would tell you).  However I agree with Marvin
	(at least in the case of choosing between theories) that Solomonoff's
	work could be used as a practical guide.

    It occurs to me that in my own scientific work that being the smallest
    theory in the sense of Solomonoff et. al. is of secondary importance. 
    What I am most concerned about is that the structure of a theory be
    smoothly extendible in the future.  I am willing to accept quite a much
    larger theory in order to gain the ability to more smoothly evolve.
    This is particularly true for software systems that we construct and
    evolve.  Is there any reason to believe that the minimum size theories
    selected by Solomonoff etc. have the property of smooth structural
    evolution?

I agree with Carl.  In the social sciences, the smallest theories are
often the most useless and sometimes the most dangerous (the
balance-of-power theory is the simplest theory in contemporary
international relations).  If DAM's description of Solomonoff's theory
is correct, then it's probably useless in any empirical science and can
be left to the mathematicians.  At any rate, if this is what
Solomonoff's theory is, then it's just a mathematical presentation of
Duhem's theory, presented in 1905 and debunked in the 1930s by Karl
Popper.  Aesthetic criteria may be perfectly legitimate when your
problem domain is pure, when you are investigating the properties of bit
strings or real numbers or some such.  But aesthetic criteria can be
dangerously misleading when your problem domain is empirical.

I'm still not sure that Solomonoff is irrelevant to the empirical
sciences though.  Aesthetic criteria may be legitimate for extending
empirical theories, taking as given and unproblematic the various
theories which must be assumed in order to conceive of the theory in
question.  If Solomonoff can indeed be interpreted this way, then
Lakatos' critique of Popper can be modified, saving it from Feyerabend's
irrationalist attack.

∂18-Jan-83  1506	John McCarthy <JMC@SU-AI> 	correspondence theory  
Date: 18 Jan 83  1500 PST
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory  
To:   gavan@MIT-OZ, batali@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: correspondence theory
In reply to: GAVAN of 1983 jan 18
I don't have a design for a life-world physicist. The point is the
definition of truth.  Suppose we have such a physicist program
in the life-world, and it generates certain sentences in the
configuration of life cells we call its memory, and we have
an interpretation of these sentences as making assertions about
the life world.  Then, we say that a sentence is true provided
what it asserts about the life world is true.  For example, the
life physicist may produce sentences interpretable as a theory
of cellular automata.  There may be another sentence in the
same language asserting that the particular world is a certain
cellular automaton.  If it asserts that its world is the life
automaton, we have the desired CORRESPONDENCE between the sentence
and the world.

To recapitulate: we are talking about a correspondence definition
of truth, not about how to design a life world physicist.  The latter
would be difficult in the present state of AI.
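
For concreteness, the life world here is Conway's Life; below is a minimal
sketch of its update rule (a Python toy of mine, no part of the argument).
The correspondence claim is then that a sentence in the physicist's memory,
interpreted as asserting "this world evolves by this rule," is true just in
case the world really does.

    from collections import Counter

    def life_step(live):
        """live is the set of (x, y) cells that are on; returns the next generation."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    blinker = {(0, 0), (1, 0), (2, 0)}
    assert life_step(life_step(blinker)) == blinker   # a period-2 oscillator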


∂18-Jan-83  1515	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Lakatos and Solomonoff  
Date: Tuesday, 18 January 1983, 17:50-EST
From: Gavan Duffy <GAVAN at MIT-OZ at MIT-MC>
Subject: Lakatos and Solomonoff  
To: JMC at SU-AI
Cc: phil-sci at MIT-OZ at MIT-MC

    Date: Monday, 17 January 1983  14:23-EST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan
    cc:   dam, phil-sci
    Re:   Lakatos and Solomonoff  

    Lakatos, if Proofs and Refutation is the book in question is concerned
    with the social process whereby the mathematical community comes to
    accept a theory.  Perhaps it also supposes that the meaning of a theorem
    and its truth is also socially determined.

The work in question is "Falsification and the Methodology of Scientific
Research Programmes," in Lakatos and Musgrave, eds., *Criticism and the
Growth of Knowledge* (Cambridge University Press, 1970), pp. 91-196.  An
FTP-able summary (which excludes Lakatos' evidence drawn from the
history of the physical sciences) is on OZ.  Hmmm... Since OZ is on the
wrong network for you, I'll put a copy on AI.  It will be FTP-able
from AI:JCMA;LAKA TOS

∂18-Jan-83  1521	Gavan Duffy <GAVAN at MIT-OZ at MIT-MC> 	Consensus Theory of Truth    
Date: Tuesday, 18 January 1983, 18:14-EST
From: Gavan Duffy <GAVAN at MIT-OZ at MIT-MC>
Subject: Consensus Theory of Truth
To: BATALI at MIT-OZ at MIT-MC, DAM at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

    Date: Sunday, 16 January 1983  17:47-EST
    From: BATALI
    Sender: BATALI

	Date: Sunday, 16 January 1983  16:59-EST
	From: DAM

	Lakatos's objections
	to simplism (that it leads to subjectivism) are unfounded when one looks
	at the mathematical details of Solomonoff et. al.'s theory.

    In fact, I think that this is the main value of the work, the thing
    that makes Marvin refer to the post-Solomonov era as a time for
    reformulating some philosophy of science ideas.  But it does not solve
    very many of the problems -- it just shows that a notion of "simpler"
    can be given an objective treatment.  There are still big problems in,
    for example, finding the simpler formulation; recognizing that it is
    simpler; convincing others that it is simpler and so on.  These
    problems are what scientists do from day to day and the view of
    science as a communicating community may be more valuable in working
    them out.  In fact, despite the success of the Solomonov approach, it
    might be worthwhile to treat Occam's razor not as an objectively
    defined notion, but rather as a roughly defined high-level goal.  That
    is: "Because it is simpler" is allowed as a valid reason in a
    scientific argument.  Showing that it is indeed simpler will take more
    argument, and those arguments can take many forms, from Solomonov to
    appeals to "elegance."  But the point is that the first justification
    of the theory is simplicity -- which is then itself justified.  This
    approach is essentially that of Doyle, and I take it to be very much
    different from that of mathematics, in which to introduce a term like
    "simple" one must exhaustively define it.  In Doyle's approach,
    statements are justified not by the definitions of the included terms
    (though such definitions, if they exist, will play a part) but by the
    support they get from other statements.

Notice that this is a characterization of the coherence theory of truth.
I would add that an objection to simplism based on the subjectivism to
which it leads cannot be discounted by looking at mathematical details,
since mathematical details are, by their nature, non-subjective.
Solomonoff's theory (which does not differ substantively from Occam's or
from Duhem's so far as I can tell) becomes unfounded the farther it is
taken from the pure domain of mathematics.

	    I think that the correspondence theory of truth works fine for
	mathematical truth (and this is probably why its adherents are largely
	mathematicians).

    I don't think so.  The correspondence theory of truth says, at bare
    bottom, that the truth of statements depends on the way the world is.
    Mathematical truth is precisely that which does not depend on the
    world at all. . . .

    I would take the COHERENCE view of truth -- in which the truth of a
    member of a set of statements depends only on properties of the
    statements and relations among them -- to be much more appealing to a
    math person because the notion of coherence might be more easily
    formulated mathematically.

Isn't coherence what axiom systems are all about?

    Perhaps the best way to go is to agree that coherence is important,
    but there is some objective "world" that is the ultimate arbiter of
    the truth of statements.  Arbitration consists in checking sense data
    against predictions.

Well, you can do this, but you'll have to assume the existence of the
"objective world" as an article of faith.  The problem with this sort of
arbitration is that for us, just as for the rat and the rat-psychologist
(remember them?), our concepts (theories) affect our percepts (empirical
results).

	However where empirical truth is concerned it
	seems to me that one is best off DEFINING the world to be behaviour
	and sense data.

    It seems to me that the world has cows and clouds and atoms and minds
    and everything.  Defining all that away is a big price to pay for
    results that aren't in yet.  And why can't we understand the world in
    terms of cows and clouds?  I think that we do, as people.  So why
    don't we try to understand (as AIers) how we (as people) understand
    the world in those realistic terms?

Right on!

∂18-Jan-83  1549	GAVAN @ MIT-MC 	correspondence theory   
Date: Tuesday, 18 January 1983  18:39-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   batali @ MIT-OZ, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 18 Jan 83  1500 PST from John McCarthy <JMC at SU-AI>

    Date: 18 Jan 83  1500 PST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan, batali
    cc:   phil-sci at MIT-OZ
    Re:   correspondence theory  

    Subject: correspondence theory
    In reply to: GAVAN of 1983 jan 18
    I don't have a design for a life-world physicist. The point is the
    definition of truth.  Suppose we have such a physicist program
    in the life-world, and it generates certain sentences in the
    configuration of life cells we call its memory, and we have
    an interpretation of these sentences as making assertions about
    the life world.  Then, we say that a sentence is true provided
    what it asserts about the life world is true.  For example, the
    life physicist may produce sentences interpretable as a theory
    of cellular automata.  There may be another sentence in the
    same language asserting that the particular world is a certain
    cellular automaton.  If it asserts that its world is the life
    automaton, we have the desired CORRESPONDENCE between the sentence
    and the world.

Which is it?  A correspondence between a sentence and the world or a
correspondence between two sentences?  The latter is trivial, and
would not seem to have much to do with "truth" at all.  The former is
impossible, since the world and words and sentences in a language
aren't coextensive.  Not only are there multiple ways of expressing
the same thing, but there are also some things that are ineffable.
Some people have private languages for expressing certain things.
Some people mean different things by the same utterances.  There is no
correspondence, not one you can describe, anyway.

See Putnam's "The Meaning of `Meaning'" in volume I of his
Philosophical Papers.

∂18-Jan-83  1827	PHIL-SCI-REQUEST@MIT-MC 	List Info:  Distributed Indexation 
Date: Tuesday, 18 January 1983, 21:17-EST
From: PHIL-SCI-REQUEST@MIT-MC
Sender: JCMa@MIT-OZ at MIT-ML
Subject: List Info:  Distributed Indexation
To: PHIL-SCI@MIT-OZ at MIT-ML

The archive has been moved to:

	 OZ:TINMAN:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVES.BABYL

The inbox for this babyl file is the old archive file:

	   OZ:SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVES.TXT



The idea is for people to be able to read the archive using ZMAIL (or
BABYL until it gets too big).  Readers of the archive are encouraged to
put ZMAIL "keywords" (or BABYL "labels", which are the same thing) on
messages in order to make it easy to find discussions on particular topics.
Try to use existing keywords (labels) where possible.

When somebody reads the archive, they can type "G" to get recent mail
to the archive.  This will get the mail from the in-box and transfer
it to the babyl file on TINMAN:.  (FTPers bear this in mind)

When reading the archive using ZMAIL or BABYL, be sure *NOT* to munge
the files.  Good heuristics for not munging the archive:

1) Don't use the MIT system for this until its ZMAIL is debugged.

2) If you use the Symbolics ZMAIL, be sure to use Release 4 *AND* load patches.

3) If something strange happens and there is any doubt about munging files,
   seek expert assistance forthwith.

(send questions or comments to: PHIL-SCI-REQUEST@MC)

p.s. This is the 196th message on the list (which was created on January 5, 1983).

∂18-Jan-83  1941	CSD.BRODER@SU-SCORE (SuNet)  	Next AFLB talk(s)   
Date: Tue 18 Jan 83 13:33:01-PST
From: Andrei Broder <CSD.Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)
To: aflb.all@SU-SCORE.ARPA
cc: CSD.DORIO@SU-SCORE.ARPA
Stanford-Office: MJH 325, Tel. (415) 497-1787

                   N E X T   A F L B   T A L K (S)

1/20/83 Prof. Ernst Mayr (Stanford):

                 "Recent Results for UET-Seheduling"

Scheduling is the assignment over time of resources to tasks, under  a
variety of  possible constraints  and optimization  criteria. In  many
cases, the  resources  are  just processors  running  in  parallel.  A
typical example  would be  to assign  a  set of  tasks which  have  to
satisfy certain precedence constraints and  each require unit time  to
execute (UET), to  some number of  identical parallel processors  such
that the overall execution time is minimized. UET-scheduling is  known
to be NP-complete if the number of processors is part of the problem
instance.

If the number of processors is a fixed m , polynomial solutions to the
general UET-scheduling problem have been known so far only for m=1 and
m=2 . We discuss  a quite simple  polynomial scheduling algorithm  for
m>2 and a new restricted class of precedence constraints, and we  also
outline  a  polynomial  algorithm  for  m=3  and  general   precedence
constraints.

******** Time and place: Jan. 20, 12:30pm in MJ352 (Bldg. 460) *******

1/27/83 Dr. Narendra Karmarkar (IBM San Jose):

"An Efficient Approximation Scheme for the One-Dimensional Bin-Packing
Problem"

                ++++ Abstract not yet available. ++++

******* Time and place: Jan. 27, 12:30pm in MJ352 (Bldg. 460) ********

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Regular AFLB meetings are  on Thursdays, at  12:30pm, in MJ352  (Bldg.
460).

If you have a topic you would  like to talk about in the AFLB  seminar
please tell  me.  (csd.broder@score,  MJH325, 497-1787)  Contributions
are wanted and welcome. Not all time slots for this academic year have
been filled so far.

For more information about future  AFLB meetings and topics you  might
want to look at the file [SCORE]<csd.broder>aflb.bbboard.
-------

∂18-Jan-83  2128	ISAACSON at USC-ISI 	Summaries, please ...   
Date: 18 Jan 1983 2112-PST
Sender: ISAACSON at USC-ISI
Subject: Summaries, please ...
From: ISAACSON at USC-ISI
To: PHIL-SCI at MIT-MC
Cc: isaacson at USC-ISI
Message-ID: <[USC-ISI]18-Jan-83 21:12:49.ISAACSON>

This is a request from the sidelines -


I didn't realize that this list generated some 200 messages in
two weeks.  Perhaps it's time to pause and consolidate.

Can someone volunteer to summarize the main themes and positions
in this discussion?

Or, how about each of the more frequent participants [you know
who you are ...]  summarizing his position?

Thanks.

-- JDI


∂18-Jan-83  2300	John McCarthy <JMC@SU-AI> 	Correspondence theory  
Date: 18 Jan 83  2231 PST
From: John McCarthy <JMC@SU-AI>
Subject: Correspondence theory  
To:   gavan@MIT-OZ, batali@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: Correspondence theory
In reply to: GAVAN of 1983 Jan 18 1549
A last try:  If the life automaton writes in its memory a sentence
asserting in the language we suppose it to be using that the physics
of its world is life, i.e. writes it in a sense similar to that in which
our physicists write that our universe satisfies the general theory of
relativity, then we
say that the sentence is true, because what it says corresponds to
the structure of its world.  This is the sense in which Russell,
who invented the term "correspondence theory", and the other advocates
of the theory, going back to the ancient Greeks, meant to define truth
by correspondence.  Defining truth by such correspondences is distinct
from defining it by coherence or by consensus.
Volume 1 of Putnam's Philosophical Papers doesn't include one called
"The Meaning of Meaning", but Putnam generally characterizes himself
as a realist, both in physics and mathematics, e.g. on page 60 in an
article entitled "What is mathematical truth?", he begins, " In this
paper I argue that mathematics should be interpreted realistically
- that is, that mathematics makes assertions that are objectively true
or false, independently of the human mind, and that SOMETHING answers
to such mathematical notions as 'set' and 'function'."
I can't find Volume 2 at the moment.


∂19-Jan-83  0203	GAVAN @ MIT-MC 	Putnam, Life Worlds, Real Worlds, Natural Language, and Natural Numbers.  
Date: Wednesday, 19 January 1983  04:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   batali @ MIT-OZ, shooting-gallery @ MIT-OZ
Subject: Putnam, Life Worlds, Real Worlds, Natural Language, and Natural Numbers. 
In-reply-to: The message of 18 Jan 83  2231 PST from John McCarthy <JMC at SU-AI>

    Date: 18 Jan 83  2231 PST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan, batali
    cc:   phil-sci at MIT-OZ
    Re:   Correspondence theory  

    Subject: Correspondence theory
    In reply to: GAVAN of 1983 Jan 18 1549
    A last try:  If the life automaton writes in its memory a sentence
    asserting in the language we suppose it to be using that the physics
    of its world is life, i.e. writes it in a sense similar to that in which
    our physicists write that our universe satisfies the general theory of
    relativity, then we say that the sentence is true, because what it says 
    corresponds to the structure of its world.  

I understand what you're saying, I just disagree.  
	
    This is the sense in which Russell,
    who invented the term "correspondence theory" and the other advocates
    of the theory, going back to the ancient Greeks meant to define truth
    by correspondence.  Defining truth by such correspondences is distinct
    from defining it by coherence or by consensus.

The coherence theory is both distinct from and incompatible with the
correspondence theory.  The consensus theory is distinct from both but
not incompatible with either.  The consensus theory is posited at a
different (social) level of analysis.  The mathematical "truths" would
not be "true" if mathematicians did not consent to them.  Perhaps they
consent to them because they think they correspond to something,
perhaps not.  But either way, they consent to them nevertheless.

    Volume 1 of Putnam's Philosophical Papers doesn't include one called
    "The Meaning of Meaning", but Putnam generally characterizes himself
    as a realist, both in physics and mathematics, e.g. on page 60 in an
    article entitled "What is mathematical truth?", he begins, " In this
    paper I argue that mathematics should be interpreted realistically
    - that is, that mathematics makes assertions that are objectively true
    or false, independently of the human mind, and that SOMETHING answers
    to such mathematical notions as 'set' and 'function'."
    I can't find Volume 2 at the moment.

Well, I probably did mix the volume numbers.  "The Meaning of
`Meaning'" has a nice critique of the correspondence theory, whichever
volume it's in.

Please remember that physics and mathematics are not everything!  Not
every theory in every discipline can be stated mathematically.  When
making a truth-claim about a theory of truth, you have to be very
careful that your range of application is not too narrow.

Putnam does not commit himself in the quotation you present.  He says
that mathematics should be INTERPRETED realistically, but it doesn't
necessarily follow that he is therefore a realist.  Asserting that
"mathematicians make assertions that are objectively true or false,
independently of the human mind" is not the same as asserting that
human minds have independent access to the objective truth or
falsehood of those assertions.  Even if Putnam is a realist in physics
and mathematics (he's probably more of a pragmatist), need he also be
a realist in, say, metaphysics?  He is, in fact, neither a metaphysical
realist nor a metaphysical idealist.  As long as we're quoting, here's
a brief passage from *Reason, Truth and History*, page xi.

  "I shall advance a view in which the mind does not simply `copy' a
  world which admits of description by One True Theory.  But my view is
  not a view in which the mind makes up the world, either . . . .  If
  one must use metaphorical language, then let the metaphor be this: the
  mind and the world jointly make up the mind and the world."

Here are two more, from page 73.

  "The trouble . . . is not that correspondences between words or
  concepts and other entities don't exist, but that too many
  correspondences exist.  To pick out just one correspondence between
  words or mental signs and mind-independent things we would have
  already to have referential access to the mind-independent things.
  You can't single out a correspondence between two things by just
  squeezing one of them hard [or doing anything else to just one of
  them]; you cannot single out a correspondence between our concepts and
  the supposed noumenal objects without access to the noumenal objects."
 
  "To an internalist this is not objectionable: why should there not
  sometimes be equally coherent but incompatible conceptual schemes
  which fit our experiential beliefs equally well?  If truth is not
  (unique) correspondence then the possibility of a certain pluralism is
  opened up.  But the motive of the metaphysical realist is to save the
  notion of the God's Eye Point of View, i.e., the One True Theory."

Perhaps you could interpret this one for me.  It's on pages 68-69.

  "First of all, there is the question of finitism: human practice,
  actual and potential, extends only finitely far.  Even if we say we
  can, we cannot `go on counting forever.'  If there are possible
  divergent extensions of our practice, then there are possible
  divergent extensions of even the natural number sequence -- our
  practice, or our mental representations, etc., do not single out a
  unique `standard model' of the natural number sequence.  We are
  tempted to think they do because we easily shift from `we could go on
  counting' to `an ideal machine could go on counting' (or, `an ideal
  mind could go on counting'); but talk of ideal machines (or minds) is
  very different from talk of actual machines and persons.  Talk of what
  an ideal machine could do is talk WITHIN mathematics, it cannot fix
  the interpretation OF mathematics."

∂19-Jan-83  1037	DAM @ MIT-MC 	Consensus Theory of Truth 
Date: Wednesday, 19 January 1983  13:32-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Consensus Theory of Truth


	Date: Monday, 17 January 1983  18:12-EST
	From: BATALI
	Sender: BATALI

	    From: DAM

	    What do you mean by "access to"?

	The ability to be causally affected by; the ability to causally affect.

	...
	Sorry. Can't help (believing in a real world).

	...
	I'm confused.  Are you claiming that the world is necessarily some
	mathematical object?  Or that the world can be described using
	mathematics?  Do you think that there is an important difference?

	I do not mean to claim that our understanding of the world (or
the world itself) must be mathematical in nature, though I admit my message
does give that impression.  However I understand mathematical things
in a much deeper (more complete) way than I understand other things.
There is an "isomorphism property" of mathematical objects which I am
convinced also holds of ALL objects and worlds, though by definition I
can not PROVE any claim about non-mathematical objects.  Let me try to
explain this property in non-mathematical terms.  In order to do this
I will adopt a realist point of view and assume that there is a world
which I have access to (using Batali's definition of access).
	Consider "our world" with clouds and cows and all (I am for
the moment assuming that there is such a world).  There are also certain
ways in which we have access to this world (all of which are necessarilly
mediated by senses and behaviour).  What I am claiming is that it is
possible to concieve of other worlds which generate the same sense data
and the same sensed respnses to behaviour.  The most obvious example
is a world in which we are all brains in a vat and a giant computer is
taking our beharioural responses and feeding us sense data.  Surely we
all agree that such a world would be indestinguishable from ours (I am
assuming a world in which the computer actually does similulate our world
exactly, I don't care if it takes magic to do it).  There are more
interesting worlds which would also generate the same sense data.
For example it might be a world in which everything has a "color"
which we in principle can't see (it doesn't interact with anything which
is physically observable).  I believe that our universe is isomorphic
to a function of a certain mathematical space, and thus I find the
Fourier transform example quite interesting.

	I will call worlds which generate the same sense data "isomorphic".
Is Batali saying that he can tell the difference between isomorphic
worlds?  Is he saying that there are worlds which are not isomorphic
to any other (different) worlds?  If isomorphic worlds are indistinguishable
and every world is isomorphic to a different world why are we concerned
with a particular world which we "can call" our own?  If one considers
all possible worlds isomorphic to ours I think there may be a unique
"simplest" and this may be the "right" one to think about.

	David Mc.

∂19-Jan-83  1119	DAM @ MIT-MC 	The smallest description of the past is the best theory for the future?
Date: Wednesday, 19 January 1983  14:13-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Hewitt @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The smallest description of the past is the best theory for the future?


	Date: Tuesday, 18 January 1983, 10:43-EST
	From: Carl Hewitt <Hewitt>

	This is what I am worried about:

         At any given point the Solomonoff et. al. method will choose
      the smallest program that accounts for past usage.  Unfortunately
      the program chosen will always be over optimized and very
      brittle.  It will have to be completely rewritten in order to
      be suitable for the next usage.

	Is there any reason to believe that the above worry is groundless?

	I originally intended to respond to this with a defense of the
Solomonoff et. al. theory but the more I think about it the more
serious this criticism seems.  I do not take the Solomonoff theory to
be a realistic interpretation of Occam's razor because it does not
account for the notion of "statement" and "truth".  These
deficiencies were brought clearly to light (I think) by McCarthy's
example of the "ball and roofs" universe.  I think that the above
criticism of the Solomonoff theory can be viewed as a related and
important criticism.
	I think that Hewitt's criticism points out the following
potential problem in the Solomonoff et. al. theory: In the Solomonoff
et. al. theory there is no notion of "similarity" between theories, or
put another way there is no notion of a "minor change" in a theory.
Given an "arbitrary" small program which generates experimental data
it seems that a small (one-bit) change in the program might lead to a
large change in the predicted data.  A "good" model of science (it
seems to me) should have some notion of similarity between theories,
and of similarity between sets of experimental data, such that small
changes in the theory lead to small changes in the predicted data.
This would provide a basis for "hill climbing" and heuristic search in
theory formation.  I think that a more complete model of scientific
theory formation, one based on ontological development, STATED
hypotheses, and a notion of entailment, would be more likely to
exhibit the desired "continuity" relation between theories and
predictions.
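	As a toy illustration of the continuity I have in mind (a Python
sketch of my own, with a pair of integer coefficients standing in for a
program -- real minimum-size encodings need not behave anywhere near this
well), hill climbing works below precisely because neighbouring theories
make neighbouring predictions:

    # "Theories" are pairs (a, b) predicting data[t] = a*t + b; the error of
    # a theory is how far its predictions miss the observed data.
    def error(theory, data):
        a, b = theory
        return sum(abs((a * t + b) - y) for t, y in enumerate(data))

    # Hill climbing: move to the best neighbouring theory, where a neighbour
    # differs by at most 1 in each coefficient.  This works only because a
    # small change in (a, b) gives a small change in the predictions.
    def hill_climb(theory, data, steps=100):
        for _ in range(steps):
            a, b = theory
            neighbours = [(a + da, b + db)
                          for da in (-1, 0, 1) for db in (-1, 0, 1)]
            best = min(neighbours, key=lambda th: error(th, data))
            if error(best, data) >= error(theory, data):
                break                     # no neighbour is strictly better
            theory = best
        return theory

    data = [3, 5, 7, 9, 11]               # generated by a = 2, b = 3
    assert hill_climb((0, 0), data) == (2, 3)
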
	Of course in defense of Solomonoff one might assume that the
"theories" are programs which are hierarchically constructed and a
change is "minor" if it occurs in some high level procedure.  However
this gives no account of "similar" predictions and it still seems that
a "minor" theory change may lead to a "major" change in the predictions.
	I do not take Hewitt's criticism of Solomonoff to be the most
important criticism however.  I still think the most important
criticism is the simple observation that people USE statements, a
notion of truth, and a notion of entailment in doing science.

	David Mc

∂19-Jan-83  1158	MINSKY @ MIT-MC 	The smallest description of the past is the best theory for the future?  
Date: Wednesday, 19 January 1983  14:52-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, JMC @ SAIL
Cc:   Hewitt @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The smallest description of the past is the best theory for the future?
In-reply-to: The message of 19 Jan 1983  14:13-EST from DAM


I guess no one has actually read Solomonoff.

The idea of a "small change" is just as obscure as that of "similar
theories", when you think about it.

As for hill-climbing, no one seems to grasp the full power - and the
full horror - of Solomonoff's idea!!!

For example, if a good basis for theory formation were "hill-climbing",
then, somewhere in the aggregate of relatively short Turing machine
programs would be one that

   (i) describes a formalism and a procedure for "hill-climbing" with it.

   (ii) describes some guidelines and/or exceptions.

   (iii) perhaps supplies some compactly-described "number" that tells
      how long to run that procedure before finding the allegedly
      good theory.

(This idea is also deeply embedded in Chaitin's thinking.)

As for "similar theories, I should add another, more siubtle point.
Solomonoff himself observed that a "single theory" might be brittle
and subject to sudden jumps with regard to reformulations. ( I am not
even sure that this is a serious concern, but he is.)   So Solomonoff
proposes that the way to do induction is NOT to use only the simplest
- that is, the shortest - description of the data.  Instead he considers
the ensemble of procedures that "account" for the data, and then
weights their predictions inversely with their lengths.  (The
weighting, he argues, corresponds to 2**(-length).)  This appears to
give the method more stability.

Finally, it is worth noting that this idea - that induction is based
on the ENSEMBLE of prediction methods that account for the data,
weighted by their complexity - seems to have been overlooked by
philosophers.   It makes that OCCAM idea seem laughably silly, to me,
at least in retrospect.  Why does everyone assume that there has to be
just one theory at a time?

∂19-Jan-83  1516	DAM @ MIT-MC 	The smallest description of the past is the best theory for the future?
Date: Wednesday, 19 January 1983  18:10-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The smallest description of the past is the best theory for the future?


	Date: Wednesday, 19 January 1983  14:52-EST
	From: MINSKY

	The idea of a "small change" is just as obscure as that of "similar
	theories", when you think about it.

	Well perhaps not.  Consider a theory of dogs and a theory of numbers.
If one changes the theory of dogs it does not affect the theory of numbers.
Of course there are lots of cases where "relevance" is not so clearly defined.
Perhaps what one really wants is some precise theory of "relevance".
It seems to me that such a theory could be built on top of notions of statement
more easily than on the theory of Turing machines.

	if a good basis for theory formation were "hill-climbing",
	then, somewhere in the aggregate of relatively short Turing machine
	programs would be one that

	   (i) describes a formalism and a procedure for
		"hill-climbing" with it.

	   (ii) describes some guidelines and/or exceptions.

	   (iii) perhaps supplies some compactly-described "number" that tells
	      how long to run that procedure before finding the allegedly
	      good theory.

	You are confusing the issues of FINDING a short theory with the
issue of evaluating theories once you have them.  "Hill climbing" will
never be an aspect of the theory itself (hill climbing is not part
of the theory of quantum mechanics) but instead is a description of
how one might go about finding theories.  I remember sending a message
saying that there was no "hill climbing bug" in Solomonoff's work
because he assumed the ability to search the entire space of theories.
You sent a message agreeing but adding that once heuristics were introduced
(as in Newtonian mechanics) hill climbing became an issue.  Yet in the
above comments you take a completely different view of hill climbing,
claiming that it might play a role in the theory itself rather than just
in ways of finding the theories.  The work by Solomonoff clearly separates
the issue of finding theories from the issue of evaluating theories and
only addresses the latter.  Who is confused here?


	(The ensemble idea) makes that OCCAM idea seem laughably silly, to me,
	at least in retrospect.

	What do you mean by that "Occam idea".  I consider the notion of
"simpler" or more "elegant" to be a notion a people (including myself)
use but I do not understand exactly what this notion is (perhaps you do).
I find that idea that the notion of "simpler" used by people is exactly
what Solomonoff described to be very strange and to be completely without
psychological motivation, either theoretical or empirical (though you
may be able to enlighten me).

(I am also not intimidated by words such as "laughable").

	David Mc

∂19-Jan-83  1620	ISAACSON at USC-ISI 	More on "O&R" machines  
Date: 19 Jan 1983 1503-PST
Sender: ISAACSON at USC-ISI
Subject: More on "O&R" machines
From: ISAACSON at USC-ISI
To: PHIL-SCI at MIT-MC
Cc: isaacson at USC-ISI
Message-ID: <[USC-ISI]19-Jan-83 15:03:39.ISAACSON>

I regret that I was not able to get either JMC or Minsky to agree
to stick by their apparent definition of "super-intelligence" in
terms of "Obstacles-and-Roofs"/Nobelists type models.
Nevertheless, I wish to pursue some consequences of such a
definition.

I think that this definition provides an interesting perspective
on the nature of intelligence, and may shift the focus some more
to problems of this sort.  Namely, the problem is subdivided into
three subproblems -

1. Characterize the class of "O&R" type models, "fantomark
patterns", etc.

2. Say something about binary strings emitted from members of
that class.

3. Characterize the *FAMILY of inferential processes*
that link such binary strings with members of that class.

I think that No.  3 is the core of the problem.  Things like
"abductive inference" belong in there, along with a lot of other,
more generic, things we can call "epistemogenic processes".

A given such inferential process may yield MORE than one member
of that O&R class.  However, the collection of all such models,
linkable to a given binary string by the given inferential
process, are "isomorphic" from the point of view of the
"epistemic subject" [Piagetese for the "center of intelligent
activity" - human or machine]employing the given inferential
process.  However, from a detached point of view, independent of
the epistemic subject, no particular "isomorphism" need be
apparent.  [Something hinting "deep structures" and "surface
structures" is at play here].

Now, if there is more than one such inferential process,
employed by the same epistemic subject, or by *different*
epistemic subjects, it is clear that the number of possible "O&R"
models linkable to a given binary string gets to be quite large,
and substantive differences (not reducible to "isomorphisms")
may show up among models linked by different inferential processes!

It is not clear that one-and-only-one model can be singled out
from such an ensemble by some "absolute" criterion or criteria.
Even if that were possible, the selection criteria are not
trivial, in my opinion.  I suspect that some longer-range
"utility" or global "efficiency" [i.e., survival of the epistemic
subject] may supersede simple-minded "simplicity".

Please don't ask me how all of this fits into the discussion.  I
think it supports some recent hints from Gavan, and the
conclusion of Minsky's last message of this afternoon.

-- JDI

∂19-Jan-83  1730	John Batali <Batali at MIT-OZ at MIT-MC> 	Solomonov    
Date: Wednesday, 19 January 1983, 19:49-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: Solomonov
To: MINSKY at MIT-MC, DAM at MIT-OZ at MIT-MC, JMC at SU-AI
Cc: Hewitt at MIT-OZ at MIT-MC, phil-sci at MIT-OZ at MIT-MC

    From: MINSKY @ MIT-MC

    As for "similar theories, I should add another, more siubtle point.
    Solomonoff himself observed that a "single theory" might be brittle
    and subject to sudden jumps with regard to reformulations. ( I am not
    even sure that this is a serious concern, but he is.)   So Solomonoff
    proposes that the way to do induction is NOT to use only the simplest
    - that is, the shortest - description of the data.  Instead he considers
    the ensemble of procedures that "account" for the data, and then
    weights their predictions inversely with their lengths.  (The
    weighting, he argues, corresponds to 2**(-length).)  This appears to
    give the method more stability.

This is the approach that is developed in great depth in Willis's paper.
Willis is concerned with a sequence of bits coming from somewhere.
Given a finite initial string of bits, what is the probability that the
next bit is a one?  Willis first shows that the probability is well
defined (it is!), and then considers ways to compute it.  The way it
works, as Marvin describes, is to compute the sum of 2**(-length) over
Turing machines that have the same initial sequence as the given bit
string, and whose next bit is one.  Willis worries appropriately about
making sure to define this with respect to programs that actually halt
in a fixed number of steps and the "actual" probability is the limit of
the ratio as the number of steps goes to infinity.
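
To see the weighting in miniature, here is a crude sketch of my own (in
Python).  A "program" here is just a finite bit pattern repeated forever --
a toy stand-in for Turing machine programs, chosen only so the example
terminates -- but the bookkeeping is the kind Marvin describes: keep every
program whose output begins with the observed string, and let the survivors
vote on the next bit with weights 2**(-length).

    from itertools import product

    def output(program, n):
        """Output of the toy machine: the bit pattern repeated out to length n."""
        return [program[i % len(program)] for i in range(n)]

    def predict_next(observed, max_len=12):
        weight_one = weight_total = 0.0
        for length in range(1, max_len + 1):
            for program in product((0, 1), repeat=length):
                if output(program, len(observed)) != list(observed):
                    continue                  # does not account for the data
                w = 2.0 ** (-length)
                weight_total += w
                if output(program, len(observed) + 1)[-1] == 1:
                    weight_one += w
        return weight_one / weight_total

    # The short patterns that fit 101010 all continue the alternation, so
    # the weighted vote for "next bit is 1" comes out around 0.89.
    print(predict_next([1, 0, 1, 0, 1, 0]))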

I wonder how scientists could actually use this theory without actually
having the limit sum.  For example, should we "pay more attention" to
shorter theories, but not ignore longer ones?  Suppose that there is one
short theory that says one thing and a buncha longer ones that say
something opposite.  At what point do we know that the weighted sum of
the longer ones is larger than the shorter one?  As I mentioned before,
I think that the theory might be best used in a robot as a vague high
level goal, so that it actually deliberates about issues like which is
shorter and so on -- rather than actually trying to compute
probabilities.
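
Just to get a feel for the scale involved, with numbers of my own choosing:
a single theory of length L carries weight 2**(-L), and each theory five
bits longer carries only 2**(-L)/32, so it takes at least 33 of the longer
ones, all agreeing with one another, before their combined weight overtakes
the short one's.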

I agree with Marvin that having an "ensemble" of theories is probably
the right thing.  It is almost certainly how we deal with everyday
thinking.  One could have a set of theories about "games" for example,
each applicable only in certain situations, with no one claimed to be
"the right one".  I think that the meanings of most words work this way.

∂19-Jan-83  2008	MINSKY @ MIT-MC 	Solomonov    
Date: Wednesday, 19 January 1983  22:53-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   DAM @ MIT-OZ, Hewitt @ MIT-OZ, JMC @ SU-AI, phil-sci @ MIT-OZ
Subject: Solomonov

	I wonder how scientists could actually use this theory without
	actually having the limit sum.  For example, should we "pay
	more attention" to shorter theories, but not ignore longer
	ones?  Suppose that there is one short theory that says one
	thing and a buncha longer ones that say something opposite.
	At what point do we know that the weighted sum of the longer
	ones is larger than the shorter one?

Well, we don't know exactly how to use this theory without the full
calculation.  At the moment I regard it as philosophically
illuminating.  In particular, whenever we see a common sense theory
like, e.g., "the best theory is the simplest one we can find" then we
can ask: "is this an interesting approximation to the
Solomonoff-Chaitin-Willis paradigm".  Then we see that it is, because
it doesn't assert that (i) it is the simplest one possible in all
language reformulations or (ii) it doesn't try to weight many
different ones, etc.

Even though we can't exactly calculate the S-C-W predictions,
the theory can still be valuable.  Newton's laws are fine even if
we can't use them in the original form to calculate galactic dynamics,
because we can use them in other ways - e.g., by proving metatheorems
or by using approximations.  Perhaps if the idea were more widely
appreciated, mathematicians would discover some better approximations.
In Part II of the Inductive Inference papers, Solomonoff makes a brave
stab at showing that the canonical theory does indeed yield the
Bernoulli distribution for the tosses of a fair coin.  I have seen no
careful yet imaginative review of his ideas in that paper.

Solomonoff is local, and we could invite him to give his views in a
seminar.  I regard him as one of the truly wise and modest great
thinkers of our time.  He works alone, lives in Harvard Square with a
lovely poetess, unencumbered by any organizational affiliations.

∂20-Jan-83  0227	John McCarthy <JMC@SU-AI> 	Lakatos review, Putnam, and Solomonoff (or even Solomonov)
Date: 19 Jan 83  2321 PST
From: John McCarthy <JMC@SU-AI>
Subject: Lakatos review, Putnam, and Solomonoff (or even Solomonov)
To:   gavan@MIT-OZ, minsky@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: Lakatos review, Putnam, and Solomonoff (or even Solomonov)
In reply to: various
I have FTPed it, so you can delete it if you want.  Its flashy rhetorical
style and many references make it difficult for me to determine what its
points amount to.  It seems to me that Popper, Kuhn and Lakatos
are in disagreement, not so much about what scientists do, but about
how to describe it.  Moreover, none of them seems to be advocating
a change in actual scientific practice.

In principle this should be great for AI, since a big part of our
problem is to describe scientific method to a computer.  However,
I can't find anything at all usable in the review.  It seems like
we have to go back to simple models in which theories of relatively
simple phenomena can be proposed and refuted or tentatively
confirmed.

Putnam is indeed not consistently realist, and therefore I'm inclined
to disagree with him.  One rhetorical question of his on p. 19 of
volume 1 illustrates one place of disagreement.  He is discussing
Russell believing in the genuine existence of the set of all
predicates on natural numbers.  Cantor, believing in this set,
showed that it is non-denumerable which means that its members
can't all individually be named.  Moreover, whatever properties
you attempt to characterize it by in ZF, it turns out that there
are models (denumerable even) that consist of only some of the
subsets that have all the same properties.  (My exposition of
the mathematical situation).  All this leads Putnam to ask the
rhetorical question: "Surely it is reasonable in science to ask
that new technical terms should EVENTUALLY be explained?"

The meta-epistemology viewpoint answers Putnam as follows:  It
would indeed be nice if all technical terms could be explained.
However, it seems that if certain kinds of worlds evolve intelligent
beings, they won't be able to EXPLAIN all the technical terms they
can profitably use - neither in physics nor in mathematics.  From
this point of view Putnam's remark is wishful thinking.  Both science
and mathematics seem to be capable of obtaining only partial
results.  Moreover, an attempt to syntactically limit the language
to those terms that can be EXPLAINED also fails.  As soon as you
give me a rule defining what you regard as acceptable, I will use
this very rule to construct an intuitively meaningful concept
that falls outside your rule.

A remark on Solomonoff and simplicity: I fear that the simplest
theories are not obtainable and can't be verified to be simplest
if obtained except in trivial cases.  From what Marvin says,
Solomonoff is careful about making his criteria often prefer
ensembles of theories.  We can partially confirm his worries about
the most compact theories by recalling Shannon's minimal relay
machines which were utterly incomprehensible, because they relied
on noticing accidental overlaps of function that permitted
hardware to be shared in unintuitive ways.  I believe that the
ensemble methods may produce theoretically interesting results
and may even suggest practically useful methods, but I don't
believe that an attack at such a general level will produce
useful results.  We need methods that allow us to tell the
machine what we know about the common sense world, e.g. about
space and objects in space.  As an aside: are the Solomonoff
methods directly applicable to the sequences produced by
"obstacles-and-roofs", and if so, what results would Marvin
expect?

Since Solomonoff is an American, his name, though of Russian
derivation, must be spelled as he spells it, especially if you
want to find him in the phone book or his papers in the library.
As for Kolmogorov,
that is the most common American transliteration of the Russian
and is used by the Library of Congress and Mathematical Reviews,
so if you want to find his works, it's what you'd better use.
Other languages have other systems of transliteration, and both
Kolmogoroff and Kolmogorow are sometimes seen.  If Solomonoff
were to move to Russia, God forbid, and publish there, the
American librarians would then call him Solomonov unless they
knew about his previous publications, and then ... .  My friend
Andrei Ershov spells his name that way when he publishes in
English, but when he publishes in Russian, it is often transliterated
here as Yershov.  The former looks like the Russian, but the
latter is in a system that permits unambiguous (usually) transliteration
back to the Russian alphabet.

∂20-Jan-83  0516	GAVAN @ MIT-MC 	Lakatos review, Putnam. 
Date: Thursday, 20 January 1983  08:12-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   minsky @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Lakatos review, Putnam.
In-reply-to: The message of 19 Jan 83  2321 PST from John McCarthy <JMC at SU-AI>

    Date: 19 Jan 83  2321 PST
    From: John McCarthy <JMC at SU-AI>

    Its [Lakatos] flashy rhetorical
    style and many references make it difficult for me to determine what its
    points amount to.  

Brief flame: I have similar problems with the rhetoric and references
of mathematicians, especially those of you and DAM, and these make it
difficult for ME to determine what YOUR points amount to.  This is
neither here nor there.  I only wanted to point out that we EACH use
rhetorical conventions and referential strategies appropriate to the
linguistic communities to which we belong.  I try not to complain that
others use terminology I don't understand (not wanting to proclaim my
ignorance too loudly), although I occasionally seek terminological
clarifications.  I only wish others could do the same.  Anyway, the
rhetoric is mostly Lakatos', since the summary is more a series of
extracts than an abstract.
    
    It seems to me that Popper, Kuhn and Lakatos
    are in disagreement, not so much about what scientists do, but about
    how to describe it.  Moreover, none of them seems to be advocating
    a change in actual scientific practice.

They disagree in both empirical descriptions (what scientists do) and
normative prescriptions (what scientists SHOULD do).  Each of them, in
their own way, advocates a change in scientific practice (although the
practice Popper advocated in *The Logic of Scientific Discovery* has
already become standard practice).  Lakatos, for instance, makes the
recommendation that emergent research programs be given sufficient
time to develop before they are subject to the swift Popperian
falsificationist sword -- before the modus tollens is applied to the
program's nascent theories.  He suggests 50 years as an appropriate
amount of time to allow a new research program to develop before
requiring that it demonstrate greater predictive power than its
rivals.  Popper, of course, demands an immediate demonstration.  Consider
how relatively new research programs, like cognitive science, would
fare against their rivals, like behaviorism, if the Popperian norm were law.

    In principle this should be great for AI, since a big part of our
    problem is to describe scientific method to a computer.  However,
    I can't find anything at all usable in the review.  

Of course this is not why the discussion came up in the first place.
Is there anything "usable" (with regard to describing scientific
method in a computer) in Solomonoff?  If computer models of scientific
method are the interest here, we should stop limiting the discussion
to induction.

It seems to me that a major point of the recent literature in the
philosophy of science (including Putnam's *Reason, Truth and History*,
which takes Kuhn as its starting point) is that there may not be any
precise algorithm for "the scientific method."  There might instead be
a whole host of methods available to scientists, and this set may be
coextensive with the set of strategies we all share for living in the
world.

    It seems like we have to go back to simple models in which theories 
    of relatively simple phenomena can be proposed and refuted or 
    tentatively confirmed.

How much further than this has AI already progressed?

    Putnam is indeed not consistently realist, and therefore I'm inclined
    to disagree with him.  

He is neither consistently realist nor consistently idealist.  He
describes a metaphysical position, internalism, which resolves the
ancient, unresolvable dispute between realists and idealists by, at
the same time, both accepting and rejecting both positions.  The credo
of the realist is that the world deploys the mind.  The credo of the
idealist is that the mind deploys the world.  The internalist credo is
that the mind and the world jointly make up the mind and the world.
One need not be a consistent realist or a consistent idealist, so long
as one is a consistent internalist (of course real people aren't
consistent anyway -- only ideal people are).

    One rhetorical question of his on p. 19 of
    volume 1 illustrates one place of disagreement.  He is discussing
    Russell believing in the genuine existence of the set of all
    predicates on natural numbers.  Cantor, believing in this set,
    showed that it is non-denumerable which means that its members
    can't all individually be named.  Moreover, whatever properties
    you attempt to characterize it by in ZF, it turns out that there
    are models (denumerable even) that consist of only some of the
    subsets that have all the same properties.  (My exposition of
    the mathematical situation).  All this leads Putnam to ask the
    rhetorical question: "Surely it is reasonable in science to ask
    that new technical terms should EVENTUALLY be explained?"

    The meta-epistemology viewpoint answers Putnam as follows:  It
    would indeed be nice if all technical terms could be explained.
    However, it seems that if certain kinds of worlds evolve intelligent
    beings, they won't be able to EXPLAIN all the technical terms they
    can profitably use - neither in physics nor in mathematics.  From
    this point of view Putnam's remark is wishful thinking.  Both science
    and mathematics seem to be capable of obtaining only partial
    results.  Moreover, an attempt to syntactically limit the language
    to those terms that can be EXPLAINED also fails.  As soon as you
    give me a rule defining what you regard as acceptable, I will use
    this very rule to construct an intuitively meaningful concept
    that falls outside your rule.

If all this is "true," then how can you maintain the correspondence
theory?

Putnam's critique of the correspondence theory is in his work on
linguistic philosophy, with which I'm far more familiar.  Before you
reject his rejection of the correspondence theory (and his support for
the coherence theory), I suggest you have a look at "The Meaning of
`Meaning'" and *Reason, Truth and History*.  Not being a mathematician,
I hesitate to judge his (or your) mathematical philosophy.  I do know
enough about the philosophy of science, and linguistic and social
philosophy, however, to hold a position on the correspondence theory.

∂20-Jan-83  0856	DAM @ MIT-MC 	Solomonoff 
Date: Thursday, 20 January 1983  11:50-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Solomonoff


	Date: Wednesday, 19 January 1983, 19:49-EST
	From: John Batali <Batali>

	Given a finite initial string of bits, what is the probability that the
	next bit is a one?  Willis first shows that the probability is well
	defined (it is!). ...

	I was wondering if you could explain this a little further.  In
what sense is the probability of the next bit "well defined".  I have
thought about this a little and I see no way to do it in a manner
which is independent of the Solomonoff et. al. stuff.  I am genuinely
curious.

	David Mc

∂20-Jan-83  0931	DAM @ MIT-MC 	Mathematical Terminology  
Date: Thursday, 20 January 1983  12:18-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   GAVAN @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Mathematical Terminology


	Date: Thursday, 20 January 1983  08:12-EST
	From: GAVAN

	I have similar problems with the rhetoric and references
	of mathematicians, especially those of you and DAM, and these make it
	difficult for ME to determine what YOUR points amount to.

	I think that I owe you an apology with regard to my
belligerent attitude (sometimes I take these discussions far more
personally than I should).  I would however like to state a general
position concerning the role of mathematics and precision in any
discussion.  I believe that mathematically precise theories are ALMOST
ALWAYS beneficial because they place certain aspects of the discussion
on much firmer ground.
	Consider the common sense statement that "unsupported bodies
fall".  The precise theory (v=at, d=1/2at**2) is a great advance over
the common sense statement.  This is true even though the precise
theory does not change the qualitative descriptions of falling bodies.
The precise theory can sometimes be verified or refuted, but it can
always be discussed at a much greater level of detail.  I think this
situation is similar to the Solomonoff et al. complexity theory.
While the theory may not change the qualitative discussion it adds
important detail.  Of course a precise theory can be interesting and
philosophically illuminating without settling the issue.  A precise
theory can be wrong, but in understanding such a theory and then
understanding why it is wrong a great deal of progress can be made.  I
think this is the case with the Solomonoff stuff.
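
A purely illustrative numeric rendering of that precision, assuming SI
units and a = 9.8 m/s**2:

  a, t = 9.8, 3.0         # assumed acceleration (m/s**2) and elapsed time (s)
  v = a * t               # v = a*t, about 29.4 m/s after three seconds
  d = 0.5 * a * t ** 2    # d = (1/2)*a*t**2, about 44.1 m fallen
  print(round(v, 1), round(d, 1))
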
	In spite of the importance of precision, however, I certainly do
not think that one should RESTRICT all discussions to purely
mathematical issues.  The issues we are usually concerned with are
empirical and fundamentally outside of mathematics and we must keep
this in mind.  Precision however is a very important tool.

	David Mc

∂20-Jan-83  1127	John McCarthy <JMC@SU-AI>
Date: 20 Jan 83  1123 PST
From: John McCarthy <JMC@SU-AI>
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

With regard to giving emergent research programs sufficient time,
my group is expecting a visit on February 2 from Colonel Adams of DARPA
and John Machado of the Office of Naval Research.  I don't know whether
they are adherents of Popper, Kuhn or Lakatos in determining how much
time they should give emergent research programs.  More seriously, I
am not aware, perhaps I should be, of research programs that have
retained or lost adherents as a direct result of Popper's advocacy.
Hmm, perhaps the mistaken (in my opinion) abandonment in the 1960s
of much language translation research was a result of Popperian ideology.

In reply to my arguments that the world may well be such that the
truth cannot always be ascertained, you ask why I still uphold the
correspondence theory.  My point was that whether a statement is
true depends on the world and not on the methods available to the
truth seeker for guessing the truth or confirming his guesses.  To
put the matter sharply, if a British Museum monkey types "The earth
is round", then it has typed a true sentence of English, while if it
types "The earth is flat" it has typed a false sentence of English.
The monkey's state of mind, if any, is irrelevant.


∂20-Jan-83  1132	GAVAN @ MIT-MC 	Mathematical Terminology
Date: Thursday, 20 January 1983  14:28-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Mathematical Terminology
In-reply-to: The message of 20 Jan 1983  12:18-EST from DAM

    Date: Thursday, 20 January 1983  12:18-EST
    From: DAM

    	Date: Thursday, 20 January 1983  08:12-EST
    	From: GAVAN

    	I have similar problems with the rhetoric and references
    	of mathematicians, especially those of you and DAM, and these make it
    	difficult for ME to determine what YOUR points amount to.

    I think that I owe you an apology with regard to my belligerent
    attitude (sometimes I take these discussions far more personally than
    I should).

I accept your apology, but you're not nearly the worst culprit.

    I would however like to state a general
    position concerning the role of mathematics and precision in any
    discussion.  I believe that mathematically precise theories are ALMOST
    ALWAYS beneficial because they place certain aspects of the discussion
    on much firmer ground.

I agree, but only when they're relevant.

    	Consider the common sense statement that "unsupported bodies
    fall".  The precise theory (v=at, d=1/2at**2) is a great advance over
    the common sense statement.  This is true even though the precise
    theory does not change the qualitative descriptions of falling bodies.
    The precise theory can sometimes be verified or refuted, but it can
    always be discussed at a much greater level of detail.  I think this
    situation is similar to the Solomonoff et al. complexity theory.
    While the theory may not change the qualitative discussion it adds
    important detail.  Of course a precise theory can be interesting and
    philosophically illuminating without settling the issue.  A precise
    theory can be wrong, but in understanding such a theory and then
    understanding why it is wrong a great deal of progress can be made.  I
    think this is the case with the Solomonoff stuff.
    	In spite of the importance of precision, however, I certainly do
    not think that one should RESTRICT all discussions to purely
    mathematical issues.  The issues we are usually concerned with are
    empirical and fundamentally outside of mathematics and we must keep
    this in mind.  Precision however is a very important tool.

I agree with everything you just said.  There are cases, however, when
the desire for precision overrides good sense.  Precise mathematical
theories often do more harm than good when they're misapplied.
Nowhere are they more misapplied than in the social sciences, where
precision often seems to be an implausible goal, at best.  The most
sophisticated of the mathematical modelers in the social sciences (I
sometimes do some of this, so I speak from experience) recognize the
danger in positing a model which does not at the same time take into
account the qualitative, common sense knowledge we have about social
phenomena.  The best of the qualitative modelers (that's really what
they are), such as the historiographers, recognize the utility of math
models of social phenomena.  In short, the best social scientists
realize that it's not one way or the other, it's both ways or it's
garbage.  Unfortunately, the best of both camps are hard to find.
Math modelers typically accuse the historiographers of imprecision and
general flakiness, while historiographers typically accuse the math
modelers of reductionism and anti-humanism.  So much for the
contemporary state of social science.

I also agree with Batali's earlier observations about knowing through
doing.  They remind me of the arguments put forward by Habermas in
*Knowledge and Human Interests*.  And some of JMC's recent messages
also reflect this.  JMC likes to speak of the "use" of theories.
Rhetorical questions: what is it that we "use" theories for?  What are
we DOING when we theorize?  What PURPOSE do theories serve?  My
ready-at-hand answer is that theories serve OUR purposes as cultures,
civilizations, societies in-the-world.  We theorize about subjects and
in corners of the world in which there exists real social interest.
Our theories are used to extend our domination over nature.  Now I see
nothing inherently wrong with this, although some naturalists do.  But
I do have my limits.

If the correspondence theory of truth could be contained within the
domains of mathematics and physics, I might not have much problem
swallowing it. I might be able to ignore it.  It would certainly
appear to be a "useful" rule-of-thumb or heuristic to be used in
extending our domination of nature.  The problem is that, as a theory
of truth, the correspondence theory cannot be so contained.  A theory
of truth is a theory of truth.  It's posited as a theory which applies
universally, whenever anyone makes a truth claim.  When the
correspondence theory is posited in the social sciences, however,
notice what happens.  When a believer in the correspondence theory
makes a truth-claim about some social phenomenon, he/she is claiming
to have some independent access to both his/her own mind and to the
world.  He/she is claiming to be able to take a God's-eye view.  This
is not only impossible, it's dangerous.

When applied outside mathematics and the natural sciences, the
correspondence theory becomes less an instrument for the domination of
nature.  When applied in the social sciences it becomes an instrument
for the domination of human beings.

I find a combination of the coherence theory (at the
individual-scientist level of analysis) and the consensus theory (at
the social level of analysis) to be not only more normatively
palatable, but also more descriptively accurate.  I even see how
coherence can connote elegance, and how both coherence and consensus
can apply as theories of truth in mathematics.  Comments?

∂20-Jan-83  1441	John Batali <Batali at MIT-OZ at MIT-MC> 	Solomonoff   
Date: Thursday, 20 January 1983, 17:35-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: Solomonoff
To: DAM at MIT-MC, Batali at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

    From: DAM @ MIT-MC

	    Date: Wednesday, 19 January 1983, 19:49-EST
	    From: John Batali <Batali>

	    Given a finite initial string of bits, what is the probability that the
	    next bit is a one?  Willis first shows that the probability is well
	    defined (it is!). ...

	    I was wondering if you could explain this a little further.  In
    what sense is the probability of the next bit "well defined".  I have
    thought about this a little and I see no way to do it in a manner
    which is independent of the Solomonoff et al. stuff.  I am genuinely
    curious.

Actually, I got a bit carried away trying to prove to Marvin that I had
read the relevant papers.  Instead of saying that the probability is
well defined, I ought to have said that it CAN BE well-defined, in the
manner I described.  To see the importance of this consider:

Suppose you are watching a sequence of bits coming down from God or
somewhere and you wonder about the probability of the next bit being
one.  A simple guess would be 1/2.  But some thinking would make you
realize that this assumes that the source of bits is random.  You would
start thinking about the kind of ways that bit sequences could be
generated, randomness being only one of them.  Suppose, for example, you
see a million ones in a row.  Should you say that the probability of the
next bit being one is still 1/2?  This would depend, it would seem, on
the probability that the bit stream is really random.  And a million
ones in a row seems to be pretty good evidence that the stream is NOT
random.
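
A minimal sketch of that intuition - not Solomonoff's construction, just a
Bayesian mixture of two assumed sources, a fair coin and a machine that
always emits ones:

  # Posterior predictive probability that the next bit is 1, weighing only
  # the two candidate sources of the stream.
  def predict_next_one(bits, prior_fair=0.5):
      like_fair = 0.5 ** len(bits)                            # fair-coin likelihood
      like_ones = 1.0 if all(b == 1 for b in bits) else 0.0   # "all ones" likelihood
      w_fair = prior_fair * like_fair                         # posterior weights
      w_ones = (1.0 - prior_fair) * like_ones
      return (w_fair * 0.5 + w_ones * 1.0) / (w_fair + w_ones)

  print(predict_next_one([1] * 20))    # ~0.9999995: a long run of ones shifts belief
  print(predict_next_one([1, 0, 1]))   # 0.5 exactly: the "all ones" source is refuted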

The crucial assumption in all this, I think, is Church's thesis.  In
this context we could take the thesis to posit that all possible sources
of bit sequences can be simulated (in their outputs) by Turing machines.
From there to Solomonoff and Willis is just a matter of deciding on the
weighting factor.

An interesting question, independent of Solomonoff, is this: ought we to
assume that the "world" can be simulated by a Turing machine?  So is the
goal of science to find the (one) description of that machine?

∂20-Jan-83  1454	John Batali <Batali at MIT-OZ at MIT-MC> 	The Social Sciences    
Date: Thursday, 20 January 1983, 17:48-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: The Social Sciences
To: GAVAN at MIT-MC, DAM at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC



I wonder how far the social sciences can get without a good theory of
individuals -- that is: a theory of minds.  Simple social theories seem
to posit simple theories of individuals and then use these theories to
determine facts about society.  For example: assuming that each consumer
tries to maximize utility (or whatever) leads to a derivation of supply
and demand curves.  It seems that these simple theories break down as
the model assumed for the individual proves to be inadequate.

Note that philosophers like Hume and Locke developed quite sophisticated
theories of "human nature" before turning to their "real" interests:
political and social philosophy.

Point: Understanding "the scientific community" is a project in the
social sciences, thus trying to understand minds in terms of scientific
communities might be completely backwards.

(I don't mean to claim that the social sciences ought to stop and wait
for AI to win.  I'm just arguing against one particular instance of
result flow.)

∂20-Jan-83  1551	DAM @ MIT-MC 	Randomness 
Date: Thursday, 20 January 1983  18:07-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Randomness


	I have always been interested in the notion of probability and
how it relates to our actual experience.  I am especially interested
these days since I am working on a logic of likelihood with Joe Halpern
of IBM San Jose.  But all of this is beside the point.  I would like to
address a question raised by Batali in regard to the Solomonoff notion
of the "likelihood" that the next bit is one.

	Date: Thursday, 20 January 1983, 17:35-EST
	From: John Batali <Batali>

	The crucial assumption in all this, I think, is Church's thesis.  In
	this context we could take the thesis to posit that all possible
	sources of bit sequences can be simulated (in their outputs) by Turing
	machines.  From there to Solomonoff and Willis is just a matter of
	deciding on the weighting factor.

	An interesting question, independent of Solomonoff, is this: ought we
	to assume that the "world" can be simulated by a Turing machine?  So
	is the goal of science to find the (one) description of that machine?

	Consider a truly random sequence in which each bit has an
independent probability P of being one.  There are two points to make
about such a sequence.  First consider the INFINITE sequence generated
in this way.  The probability that this infinite sequence is an
infinite sequence which is generated by some finite Turing machine is
0 (there are uncountably many such infinite sequences and any
countable subset of them has measure zero).  The second point to be
made is that one could take AS A THEORY that the sequence is
independently and randomly distributed with the probability that any
bit is one being P.

	The second point here is the more interesting.  It says that
the notion of probability can be used in particular (single) theories.

	By the way Hewitt's claim that actors can compute uncomputable
functions is based on the existence of arbiters which introduce true
randomness into his programs.  This is equivalent to giving a program
a truly random bit string.  Given such a string a computer can
compute "uncomputable" functions.  Note that the set of functions
"computable" by such random machines is uncountable since the set of
random bit strings is uncountable.  Thus it is clear that some of
these functions will be "uncomputable" because there are only
countably many "computable" functions.

	David Mc

∂20-Jan-83  2054	JCMa@MIT-OZ 	The Social Sciences   
Date: Thursday, 20 January 1983, 23:51-EST
From: JCMa@MIT-OZ
Subject: The Social Sciences
To: Batali@MIT-OZ
Cc: phil-sci@mc
In-reply-to: The message of 20 Jan 83 17:48-EST from John Batali <Batali at MIT-OZ>

    Date: Thursday, 20 January 1983, 17:48-EST
    From: John Batali <Batali at MIT-OZ>
    Subject: The Social Sciences

    I wonder how far the social sciences can get without a good theory of
    individuals -- that is: a theory of minds.    

There are various theories of the individual in the social sciences.  The
major distinctions are in ontological and epistemic assumptions.  The
marxists (and economic determinists) argue that societies are not
governed by the decisions of individuals but rather by "objective"
forces in society.  This position could be generalized and reformulated
by claiming that macro outcomes are a function of macro input and that
micro outcomes (a person's decision) are conditioned, circumscribed by
macro states.  Thus, the amount of variation in the macro outcome
arising from individual decisions is minimal, except when the person
is in a position of great power.  In that case, it is argued that the
person is acting in the interest of some social sectors (unless the
person is irrational) which make it possible for the person to hold
great power.  

    Simple social theories seem to posit simple theories of individuals
    and then use these theories to determine facts about society.  For
    example: assuming that each consumer tries to maximize utility (or
    whatever) leads to a derivation of supply and demand curves.  It
    seems that these simple theories break down as the model assumed for
    the individual proves to be inadequate.

That is "true."  Neo-classical economics is a good example of this.  The
concepts of perfect information and unrestricted factor mobility fall
into this category.  But, this is not the only way such theories fall apart:
frequently, they can be shown to be internally inconsistent, even though one
allows them their assumptions.

    Note that philosophers like Hume and Locke developed quite sophisticated
    theories of "human nature" before turning to their "real" interests:
    political and social philosophy.

These are some of the guys neo-classical economics got their ideas from.

    Point: Understanding "the scientific community" is a project in the
    social sciences, thus trying to understand minds in terms of scientific
    communities might be completely backwards.

Are you suggesting the societies of mind are asocial?  The point of the
comparison lies in the claim that there are structural-procedural
similarities [the systems theory use of "isomorphism," not the
mathematical one] between "society of mind" and a "scientific
community."  Social organization refers to processes which fall within
the class of collective interaction.  So, if you want to say that there
are no collective interactions in the "society of mind," then I think
you're missing the point:  Identifying the coherent structural-procedural
similarities between the "society of mind" and a society of scientists is
part of the project of understanding mind through the "societal" metaphor.

The undertaking is useful, presumably, because we don't know much about
the "society of mind" and we are looking for metaphors to give us
epistemic access to the domain.  If you want to claim that social
science has no useful metaphors, you cannot just do it out of hand.  You
must first argue that all paradigms in the social sciences (of which
there are several) are theoretically incoherent, and therefore
unsuitable for the task.  Shooting down the neo-classical psychological
assumptions about utility maximization does not do anything for you
because that represents, probably, the weakest social science paradigm
around.

The fact of the matter is that we are talking about processes that take
place at different levels of abstraction, and it is conceivable that the
study of both levels can inform an overall theory.  What you want is
criteria for excluding facets of the two abstraction levels from
abductive use.  I expect that there will be some that you can exclude
and others which you cannot.  But, I don't think you can show that all
possible mappings are useless.

∂21-Jan-83  1336	MINSKY @ MIT-MC 	Randomness   
Date: Friday, 21 January 1983  12:09-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ, MINSKY @ MIT-OZ
Cc:   Batali @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Randomness
In-reply-to: The message of 20 Jan 1983  18:07-EST from DAM

    From DAM: 	By the way Hewitt's claim that actors can compute
	uncomputable functions is based on the existence of arbiters
	which introduce true randomness into his programs.  This is
	equivalent to giving a program a truely random bit string.


I wonder if this argument is contaminated by the nature of that
"random" sequence.  There is an old theorem by Shannon, Moore,
DeLeeuw, and Shapiro that states something like this:

Let a "probabilistic P-oracle" be three states in a finite-state
machine such that the machine will go from the first state to the
second with probability P and to the third state with probability 1-P.

Let M be a Turing machine containing a "probabilistic P-oracle", and
suppose that M emits a certain infinite sequence S with probability
greater than Zero.  Then, if the number P is computable itself, so is
the sequence S.  (I forget the details; there may be a "with
probability 1" clause somewhere in this.)  That is, there is some
non-probabilistic Turing machine that can compute S.

So Hewitt's argument probably has a flaw in that he is sneaking an
uncomputable number in from another source.  Unless he assumes that
the arbiter uses a computable probability distribution, it is already
clear that a Turing machine with a non-computable input can compute a
non-recursive function.

The idea of the proof is simple.  Suppose that a computer has access
to a source of random bits with probability P of getting a 1.  Then
the computer can discover the digits of the number P itself as
follows.  Just flip about 2**2n coins to find the n-th digit.  Now you
might fear that some digit will be wrong because of sampling errors.
However, for each epsilon, you can take enough samples to get the
error probability below epsilon.  Now, for each digit, choose some
absolutely convergent series of numbers that sums to less than
epsilon, and sample until the chance of the n-th digit being wrong is
less than the n-th summand.  Then you have probability 1-epsilon that
you've got all the digits of the number P.
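
A small sketch of that sampling step, assuming only that we can draw as
many independent bits of bias P as we like; Hoeffding's inequality stands
in for the error bound described above:

  import math, random

  def estimate_bias(draw_bit, tol, err):
      # Enough samples that |estimate - P| < tol except with probability < err
      # (Hoeffding: 2*exp(-2*n*tol**2) <= err).
      n = math.ceil(math.log(2.0 / err) / (2.0 * tol ** 2))
      return sum(draw_bit() for _ in range(n)) / n

  source = lambda: 1 if random.random() < 0.3 else 0   # stand-in arbiter, P = 0.3

  # Spend error budget epsilon/2**n on the n-th digit so the total stays below
  # epsilon; a digit is settled this way only when P is not at a rounding boundary.
  epsilon = 0.01
  for n in (1, 2):
      p_hat = estimate_bias(source, tol=10.0 ** -(n + 1), err=epsilon / 2 ** n)
      print(n, round(p_hat, n))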

Now, using P you can simulate the behavior of the probabilistic Turing
machine as well as you like.  A long string of similar arguments then
shows that any output string of M that has non-zero probability can be
computed by this computer with a certain computable input - hence by
some Turing machine.

I sure hope Hewitt is aware of this and stated his result accordingly.

You could argue that a random arbiter has a non-computable
probability, with probability 1, since most numbers are
non-computable.  However that would amount, in my view, to assuming
that the situation takes place, FROM THE VERY START, in a universe not
subject to Church's thesis.  In that case, of course, you don't need
any Turing machines; just connect three arbiters around in a little
network, to get a non-computable bit-string.

∂21-Jan-83  1345	GAVAN @ MIT-MC 	The Social Sciences
Date: Friday, 21 January 1983  15:24-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Social Sciences
In-reply-to: The message of 20 Jan 1983 17:48-EST from John Batali <Batali>

    Date: Thursday, 20 January 1983, 17:48-EST
    From: John Batali <Batali>
    To:   GAVAN, DAM
    cc:   phil-sci
    Re:   The Social Sciences

    I wonder how far the social sciences can get without a good theory of
    individuals -- that is: a theory of minds.  Simple social theories seem
    to posit simple theories of individuals and then use these theories to
    determine facts about society.  

Not to determine FACTS, but to determine NORMS.  These are subsequently used
in the interpretation of facts.

    For example: assuming that each consumer
    tries to maximize utility (or whatever) leads to a derivation of supply
    and demand curves.  It seems that these simple theories break down as
    the model assumed for the individual proves to be inadequate.

Agreed.  But remember that the model assumed for the individual also serves
as the criterion for the legitimacy of a regime.

    Note that philosophers like Hume and Locke developed quite sophisticated
    theories of "human nature" before turning to their "real" interests:
    political and social philosophy.

So did Plato.  All social and political philosophies are based upon an
explicit or implicit philosophy of mind.  Hume and Locke developed
normative theories.  The utilitarian theory you described above is
empirical, although there are normative versions of it (G.E. Moore
comes to mind).  In the normative version, the goal of the state is to
maximize the sum of utilities across the whole of society.  John Rawls
presents the best liberal critique in *A Theory of Justice*.

    Point: Understanding "the scientific community" is a project in the
    social sciences, thus trying to understand minds in terms of scientific
    communities might be completely backwards.

Empirical social science is no more dependent upon models of human
nature than are the empirical physical sciences.  This is not to say
that models of human nature play no role in empirical social science;
I'm only suggesting that they have about as much relevance as they do
in all sciences.  The physicist who avows a correspondence theory of
truth, for example, is implying something about his/her theory of
human nature.  And this will affect the outcome of his/her results.

There are circularities in the metaphors here.  Empirical sociological
functionalism (the dominant paradigm in Western (non-Marxian) social
science today) views society as AKO Turing machine (AKO = "a kind
of").  See, for instance, Karl Deutsch (Norbert Wiener's student),
*The Nerves of Government*.  Psychological functionalism, grounded in
psychophilosophical functionalism, views the mind (or "human nature"
if you resolve the mind-body problem correctly) as AKO Turing machine.
Normative social philosophy grounds its notion of legitimacy in the
philosophy of mind (as does the philosophy of science).  Scientific
practice occurs within a society legitimated on the ground of a
philosophy of mind.  The philosophy of mind grounds itself in
empirical observations of actors in society.  And so on and so forth.
I have a paper on this.  Now . . . what about the society of mind?

    (I don't mean to claim that the social sciences ought to stop and wait
    for AI to win.  I'm just arguing against one particular instance of
    result flow.)

Since the metaphors are circular (meta-circular?), I would argue
against this linear view.  I prefer mutual bootstrapping.

∂21-Jan-83  1345	MINSKY @ MIT-MC 	Randomness   
Date: Friday, 21 January 1983  12:14-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Batali @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Randomness
In-reply-to: The message of 20 Jan 1983  18:07-EST from DAM


DAM: Note that the set of functions "computable" by such random
	machines is uncountable since the set of random bit strings is
	uncountable.  Thus it is clear that some of these functions
	will be "uncomputable" because there are only countably many
	"computable" functions.

DAM is right about that.  In the Shannon et al.  Theorem, they decided
that a "one-time" sequence probably oughtn't be considered to be a
"function" because you can't get it again.  So they decided to
consider sequences that had a non-zero probability of occurring.  I
think that their definition was the right thing to do, because
otherwise, in some sense, the sequences really have nothing to do with
the computers.

∂21-Jan-83  1345	GAVAN @ MIT-MC 	The smallest description of the past is the best theory for the future?   
Date: Friday, 21 January 1983  15:47-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: The smallest description of the past is the best theory for the future?
In-reply-to: The message of 19 Jan 1983  14:52-EST from MINSKY

    Why does everyone assume that there has to be just one theory at a time?

Not everyone does.  Those that do make this assumption do so because
they think that there's some sort of correspondence -- that we "copy"
things from a world they think is external.
				

∂21-Jan-83  1348	MINSKY @ MIT-MC 	Learning Meaning  
Date: Friday, 21 January 1983  15:42-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   ISAACSON @ USC-ISI, phil-sci @ OZ
Subject: Learning Meaning
In-reply-to: The message of 17 Jan 1983  15:27-EST from ISAACSON at USC-ISI


	Date: Monday, 17 January 1983  15:27-EST
	From: ISAACSON at USC-ISI
	To:   minsky
	Re:   Learning Meaning

	Received today your papers on "K-Lines" and "Learning Meaning."
	Thank you.  It will take me a few days to digest.

	I wish you'd comment on your apparent definition of
	"super-intelligence" in terms of the "Obstacles and
	Roofs"/Nobelists model.  If you, indeed, hold that, it may
	provide an interesting perspective on the nature of intelligence
	and shift the focus some more to attacking problems of that sort.

Can't help, because I don't believe in defining the standard words like
"intelligence".  I try to avoid traditional terminologies because I
suspect them of being unproductive.  Thus, I shy away from "induction"
and "abduction", etc., and try to describe processes I understand, such
as those "accumulation", "uniframing", and "reformulation" processes in
"Learning Meaning".

∂21-Jan-83  1354	GAVAN @ MIT-MC 
Date: Friday, 21 January 1983  14:38-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
In-reply-to: The message of 20 Jan 83  1123 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

    With regard to giving emergent research programs sufficient time,
    my group is expecting a visit on February 2 from Colonel Adams of DARPA
    and John Machado of the Office of Naval Research.  I don't know whether
    they are adherents of Popper, Kuhn or Lakatos in determining how much
    time they should give emergent research programs.  
    
Why don't you ask them?  While you're at it, ask them what they think of the
role of metaphor and analogy in hypothesis formation.  Chuckle.

    More seriously, I
    am not aware, perhaps I should be, of research programs that have
    retained or lost adherents as a direct result of Popper's advocacy.
    Hmm, perhaps the mistaken (in my opinion) abandonment in the 1960s
    of much language translation research was a result of Popperian ideology.

There you go.

    In reply to my arguments that the world may well be such that the
    truth cannot always be ascertained, you ask why I still uphold the
    correspondence theory.  My point was that whether a statement is
    true depends on the world and not on the methods available to the
    truth seeker for guessing the truth or confirming his guesses.  

What world?  The one that's "really" there or the one you guess is there?

    To put the matter sharply, if a British Museum monkey types "The earth
    is round", then it has typed a true sentence of English, while if it
    types "The earth is flat" it has typed a false sentence of English.
    The monkey's state of mind, if any, is irrelevant.

If a monkey randomly types "the earth is round", then he has typed as
random a string as "bdr hwt beebdhmet".  "The earth is round" is a
true sentence of English only if there's an English-interpreter around
to interpret it and this interpreter believes the string to represent
something true.  Then the string refers to something true for the
interpreter.  Yes, the monkey's state of mind is irrelevant.  What's
not irrelevant is the state of the mind which is interpreting the
string.  Suppose you counter that the monkey randomly types the string
"the earth is round" in a possible world that had no
English-interpreters.  What then?  Does the statement represent
something true?  For whom?  

∂21-Jan-83  1419	GAVAN @ MIT-MC 	Is there a mathematician in the house? 
Date: Friday, 21 January 1983  17:09-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Is there a mathematician in the house?
In-reply-to: The message of 21 Jan 1983  12:09-EST from MINSKY

The other day I sent a long message in response to JMC.  I asked for
an interpretation of a quote by Putnam.  Apparently no one saw it since it
was buried at the end of the message.  I really would like to get comments
on this passage from any mathematicians in the house.

  " . . . [H]uman practice, actual and potential, extends only finitely
  far.  Even if we say we can, we cannot `go on counting forever.'  If
  there are possible divergent extensions of our practice, then there
  are possible divergent extensions of even the natural number sequence
  -- our practice, or our mental representations, etc., do not single
  out a unique `standard model' of the natural number sequence.  We are
  tempted to think they do because we easily shift from `we could go on
  counting' to `an ideal machine could go on counting' (or, `an ideal
  mind could go on counting'); but talk of ideal machines (or minds) is
  very different from talk of actual machines and persons.  Talk of what
  an ideal machine could do is talk WITHIN mathematics, it cannot fix
  the interpretation OF mathematics."

  pp. 68-69, *Reason, Truth and History*

∂21-Jan-83  1548	ISAACSON at USC-ISI 	Re:  Learning Meaning   
Date: 21 Jan 1983 1540-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Learning Meaning
From: ISAACSON at USC-ISI
To: MINSKY at MIT-MC
Cc: PHIL-SCI at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]21-Jan-83 15:40:03.ISAACSON>

In-Reply-To: Your message of Friday, 21 Jan 1983, 15:42-EST

Your point is well-taken.


I've just completed reading "Learning Meaning" and should be
ready to converse in terms of accumulations, uniframing, and
reformulations.

Please consider, though, that some people find neologisms
difficult to adapt to.  [Remember the furor over "epistemogens"?]
I think that somewhere along the road some bridges ought to be
provided to common terminology, however defective, if one wishes
to connect with a viable critical mass of "agents".

-- JDI


∂21-Jan-83  1720	BAK @ MIT-MC 	Hewitt's claim  
Date: Friday, 21 January 1983  20:14-EST
Sender: BAK @ MIT-OZ
From: BAK @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Batali @ MIT-OZ, DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Hewitt's claim
In-reply-to: The message of 21 Jan 1983  12:09-EST from MINSKY

I think people are misinterpreting Hewitt's claim for actor systems.
I don't have his original mail saved so it may possibly be that it is
confused.  I will give the result that I think he meant:

There exists a nondeterministic actor machine that is guaranteed to
terminate, but can be in any of an unbounded number of states,
so-called "unbounded nondeterminism."

The machine is simple to construct.  Let there be an actor called COUNTER
that takes two messages, INCREMENT and STOP.  When COUNTER receives an
INCREMENT message it increments an internal counter and sends itself
another INCREMENT message.  When it receives a STOP message it stops
(or writes its number some place and stops).  We initialize the system
by sending both an INCREMENT and a STOP message.  Because of the way
actor systems are axiomatized, we have no guarantee how many cycles of
INCREMENT messages will be processed before the STOP message, but we
are guaranteed the STOP message will eventually arrive.  Hence the
infinite number of states.  See Will Clinger's thesis for an axiomatization
of actor computations and a rigorous proof.
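
A loose simulation of the construction, with the assumed message names
above; the arbiter's choice is modelled here by a fixed coin, which is an
extra assumption - the actor axioms promise only that the STOP message is
eventually delivered, not any particular probability:

  import random

  def run_counter(p_stop=0.1):
      # COUNTER starts with both an INCREMENT and a STOP message pending;
      # at each step the arbiter decides which of the two it sees next.
      count = 0
      while True:
          if random.random() < p_stop:
              return count      # STOP delivered: halt, reporting the count
          count += 1            # INCREMENT delivered: bump and re-send INCREMENT

  # Every run halts (with probability 1 under this model), yet no single bound
  # covers the value returned across all runs -- the "unbounded" part.
  print(sorted(run_counter() for _ in range(10)))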

The only sense that it is emulating a nonrecursive function is by use
of a nonrecursive arbiter, a result also true of nondeterministic
Turing machines with nonrecursive P-oracles.  The space of possible
states of terminated computations is countable.  There are uncountably
many nonterminating computations, but this claim can also be made for
nondeterministic Turing machines.  (Write either a 1 or 0 on each of
an infinite number of squares.)

The reason this result is important is because Dijkstra proved a
theorem that says for any nondeterministic Turing machine that is
guaranteed to halt, it can only halt in a bounded number of states.
His proof goes: Since the alphabet is finite, and the number of
possible states that the finite state machine can be in is finite,
there are finitely many branches the computation can take each step.
Thus we can draw a tree of the possible states the machine can be in.
The root of the tree is the initial state of the machine; the
successors of each point in the tree are the possible states after one
more step by the Turing machine.  Now we know that there can be no
infinite branches in the tree because the computation is guaranteed to
terminate.  By applying Koenig's lemma we know that the number of
leaves of the tree (possible terminating states) is finite.

∂21-Jan-83  1915	MINSKY @ MIT-MC 	Is there a mathematician in the house?
Date: Friday, 21 January 1983  22:07-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Is there a mathematician in the house?
In-reply-to: The message of 21 Jan 1983  17:09-EST from GAVAN


If Putnam said this:

  "If  there are possible divergent extensions of our practice, then there
  are possible divergent extensions of even the natural number sequence
  -- our practice, or our mental representations, etc., do not single
  out a unique `standard model' of the natural number sequence."

then I will send you all a reply shortly.  I have a theory of why
arithmetic propositions appear to be "a priori".  I can't imagine
what Putnam means in that passage, and I will ask him.  Personally I find it
unthinkable that one could imagine anything that is a lot like the number system
but is different, e.g., by skipping the number 17 or something like that.

∂21-Jan-83  2152	John McCarthy <JMC@SU-AI> 	correspondence theory  
Date: 21 Jan 83  1848 PST
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory  
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: correspondence theory
In reply to: Gavan of 1983 jan 21
Gavan: Indeed!  Even if there were no English interpreters, the statement
"The world is round" would still be a true sentence of English.  As far
as I can tell, this would be the position of all the supporters of the
correspondence theory including Tarski.  A sentence in a language
is an abstract object existing mathematically independent of whether
anyone ever interprets it or even exists to interpret it.


∂21-Jan-83  2158	MINSKY @ MIT-MC 	Hewitt's claim    
Date: Saturday, 22 January 1983  00:51-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   BAK @ MIT-OZ
Cc:   Batali @ MIT-OZ, DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Hewitt's claim
In-reply-to: The message of 21 Jan 1983  20:14-EST from BAK


	Because of the way actor systems are axiomatized, we have no
	guarantee how many cycles of INCREMENT messages will be
	processed before the STOP message, but we are guaranteed the
	STOP message will eventually arrive.  The only sense that it is
	emulating a nonrecursive function is by use of a nonrecursive
	arbiter, a result also true of nondeterministic Turing
	machines with nonrecursive P-oracles.

Yes.  It seems unlikely to me that you could have anything like a
computable P-oracle that would be guaranteed to halt in unbounded
time.  Only I question the significance of the result at all - because
then the arbiter idea appears to be inconsistent with any computable
model of physics.  (This should be no surprise, since it rejects the
Church-Turing thesis.)

Put it this way: consider a coin-tossing operator that has
three possible values subject to these constraints.
	It can have the value HALT.  Otherwise 
	It has value 0 with probability 1/2, or
	It has value 1 with probability 1/2, - BUT
	It is certain to have the value HALT, eventually.

Now in classical analysis, there is only room for HALT to have, at
each moment, either probability zero or some non-zero value.  In
neither case can we be certain that HALT will ever occur.  Now it may
be that there is some non-standard model of analysis in which this
makes sense - but I wonder if the details have been considered.  The
trouble, as I see it, is that believing in the existence of such
arbiters (even as an axiom) entails other dreadful consequences.  At the
least, I think it implies that any model of Physics as we know it,
classical or quantum, must assume some non-computable parameter.  So I
wouldn't treat it as other than a humorous mathematical joke.

This is not to say that it isn't a nice theorem.  However, like other
jokes, it should not be told more than once, and that has been done.

∂21-Jan-83  2211	MINSKY @ MIT-MC 	correspondence theory  
Date: Saturday, 22 January 1983  01:01-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   gavan @ MIT-OZ, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 21 Jan 83  1848 PST from John McCarthy <JMC at SU-AI>


McCarthy: Indeed!  In fact, when I read Tarski's book on semantics, I
didn't know German, and all the examples were propositions like "Die
Mund ist Blau" and things like that, at whose meaning I could only
guess.  I read this when a young student, perhaps as early as high
school, and I believe I missed the point pretty completely.  In fact,
I seem to recall saying to myself - as perhaps GAVAN did - something
like "Well, I guess he's assuming that every educated person
understands German".

The bottom line, unfortunately, was that I decided that if this was
supposed to be a "theory of semantics" then he must be some kind of a
nut.  What I thought semantics should be was something like Korzybski
was trying to do, only better.  I experienced similar feelings later
when Quine explained to me that "Boston is in Massachusetts" is true
if and only if Boston is in Massachusetts.

∂21-Jan-83  2252	John McCarthy <JMC@SU-AI>
Date: 21 Jan 83  2249 PST
From: John McCarthy <JMC@SU-AI>
To:   minsky@MIT-OZ, gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

Marvin: Indeed!  My reaction to Tarski's exposition was the same,
except for reading it in English.  The correspondence theory of truth
seems obvious, because it agrees with the common sense notion.  It is
only when someone proposes some other theory or denies that there is
any such thing as a true sentence that Tarski's statements seem other
than tautologous, i.e. that there seems to be a need for a theory of
truth.  Notice, however, that the correspondence theory requires that there be
something objective to correspond to - a physical world that either agrees or
not with the sentences, or mathematical objects such as sets that either
do or don't have the properties asserted.  This also is the common
sense view, and hence IS obvious unless challenged.  As I understand it,
coherence theories don't require that there be an objective reality, since
they purport only to relate experiences.  However, we correspondence
theorists consider that the coherence theorists have been unsuccessful
in relating experiences to one another except in so far as they have
allowed external reality to sneak back into their theories.

In the hopes of eliciting some reaction from someone besides GAVAN, who
seems not to believe in objective reality, I will again
advocate meta-epistemology.  We try to get a mathematical
theory of the relation between the strategy of a knowledge
seeker in a world and its success in discovering facts about
the world.  This theory doesn't directly involve conjectures
about the real world, because the worlds studied are abstract
mathematical objects.  The theory would relate the following
things:

1. The structure of the world.  Since we get to postulate the world
or give it any properties we want, it's no mystery to us.

2. The imbedding of the knowledge seeker in the world.  He could be
outside it as in Ed Moore's "Gedanken Experiments with Sequential
Machines" in Automata Studies.  However, it is more interesting
and gives theories more transferable to ourselves if he is built
as part of it, as in the Conway life world.  The imbedding also
includes his input-output relations to the rest of the world.

3. The language used by the knowledge seeker to express his
conjectures about the world.  Let's assume that we have an
interpretation of at least part of this language as expressing
assertions about the world.

4. His philosophy of science - what assertions he considers
meaningful and what he regards as evidence.

5. Finally, what he succeeds in discovering about the world, i.e.
the true sentences he generates in the language we interpret as
expressing assertions about the world.  Some knowledge seekers,
of course, might generate expressions that could be regarded
as assertions about the world in some other language, but we
won't count them.

	The issues that arise include the following:

1. If its language only includes input-output relations, will it
even discover as many input-output relations as someone with
a more liberal philosophy of science?

2. Can we make a good one using the Solomonoff strategies, or rather,
in what kinds of worlds will the Solomonoff strategies be effective?

3. What about mathematics?  Lenat's AM worked with numbers and
properties of numbers, but I don't think it had any richer
ontology - if it was formal enough to be said to have an ontology.

∂21-Jan-83  2305	John McCarthy <JMC@SU-AI>
Date: 21 Jan 83  2258 PST
From: John McCarthy <JMC@SU-AI>
To:   minsky@MIT-OZ
CC:   phil-sci@MIT-OZ 

Marvin: It's too harsh to say that a joke should not be told more than
once.  As Lloyd Shapley said to me when I told him I was taking a
trip to Novosibirsk, "If you haven't got a new lecture, get a new
audience".  T. H. Huxley once said, "Mr. Herbert Spencer's idea of a tragedy
is when a beautiful theory is slain by cruel fact".  When I tried to
track down the precise occasion, I discovered that both he and Spencer
liked the joke so well that they both said it several times.


∂21-Jan-83  2308	BAK @ MIT-MC 	Hewitt's claim  
Date: Saturday, 22 January 1983  01:58-EST
Sender: BAK @ MIT-OZ
From: BAK @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Batali @ MIT-OZ, DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Hewitt's claim
In-reply-to: The message of 22 Jan 1983  00:51-EST from MINSKY

    Yes.  It seems unlikely to me that you could have anything like a
    computable P-oracle that would be guaranteed to halt in unbounded
    time.  Only I question the significance of the result at all - because
    then the arbiter idea appears to be inconsistent with any computable
    model of physics.  (This should be no surprise, since it rejects the
    Church-Turing thesis.)  This is not to say that it isn't a nice
    theorem.  However, like other jokes, it should not be told more than
    once, and that has been done.

I don't agree.  Consider the following machine: We have one U235 atom
in a box with some device that will tell us when that U235 atom
decays.  On top of the box is a clock that we start when we begin the
experiment.  When the U235 atom decays the detector sends a HALT
message to the clock.  With probability 1 the clock will receive a
HALT message and there will be some number on its face.  Now, as I
understand the laws of physics as they are currently formulated, the
time that clock will read is not computable.  Furthermore, the system
exhibits unbounded nondeterminism.  (I realize that we can only prove
it halts with probability 1, but that doesn't destroy the basic
argument).  In other words, physics as it is now conceived violates
the Church-Turing thesis.  I understand that you and Fredkin and maybe
some other people would like to create a physics where this wasn't
true, but at least you shouldn't think of quantum mechanics, as it
currently exists, as being laughable.





∂22-Jan-83  0523	MINSKY @ MIT-MC 	Hewitt's claim    
Date: Saturday, 22 January 1983  08:18-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   BAK @ MIT-OZ
Cc:   Batali @ MIT-OZ, DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Hewitt's claim
In-reply-to: The message of 22 Jan 1983  01:58-EST from BAK


From BAK: When the U235 atom decays the detector sends a HALT message
	to the clock.  With probability 1 the clock will receive a
	HALT message and there will be some number on its face.
	...  Now, as I understand the laws of physics as they are
	currently formulated, the time that clock will read is not
	computable.


No. This is all wrong.  When the clock halts, the number is finite and
hence computable.  What you can't compute is what the number will be.  
All this says is that a probabilistic event is not deterministic.

	 (I realize that we can only prove it halts with probability
	1, but that doesn't destroy the basic argument).

Yes it does.  It means you can't assume there is no infinite sequence,
so your fan theorem proof is wrong.

	I understand that you and Fredkin and maybe some other people
	would like to create a physics where this wasn't true, but at
	least you shouldn't think of quantum mechanics, as it
	currently exists, as being laughable.

Same issue of difference between probabilistic and noncomputable.  I
repeat: the theorem of Moore, DeLeeuw, Shannon and Shapiro simply says
that if the probability numbers are computable, then so is any number
that such a machine can emit with non-zero probability.  It isn't a
question of whether the physics is deterministic, but whether there is
a non-computable parameter among the system's initial probabilities.

∂22-Jan-83  1031	MINSKY at MIT-OZ at MIT-MC 	A theory.   
Date: 22 Jan 1983 1329-EST
From: MINSKY at MIT-OZ at MIT-MC
Subject: A theory.
To: phil-sci at MIT-OZ at MIT-MC


The message after this is several pages long.  It is my theory of
why it is possible for us to discuss meaning and truth of certain
mathematical-like subjects.  I am cluttering the net with it because
I think it shows how a computational-psychological view refreshes
philosophy, and illustrates the points I have tried to make
before about why I think traditional ideas are wasting everyone's time.

But if you don't think this theory is a revolutionary revelation,
showing the modern way to proceed, please let me know!
-------

∂22-Jan-83  1037	MINSKY at MIT-OZ at MIT-MC 	A theory.   
Date: 22 Jan 1983 1330-EST
From: MINSKY at MIT-OZ at MIT-MC
Subject: A theory.
To: phil-sci at MIT-OZ at MIT-MC


     WHY MATHEMATICS CAN BE KNOWN A PRIORI BY PEOPLE AND MACHINES
                      MARVIN MINSKY,  Jan. 1983

When we say things like

                       "Some Elephants exist."

we understand that such "knowledge" is contingent on learning and
experience.  We can imagine worlds without elephants.  But when we say
things like
                       "Two and Two is FOUR."

we feel such "knowledge" has a different quality.  It seems almost
unthinkable that there could be anything like Two and Four for which
Two plus Two were anything but Four - and nearly unthinkable that
someone might not know this and still be intelligent.  [Note:
Development]

Why do some ideas seem Self-Evident?  Why do some things seem so
obviously true that we find it unthinkable that they not be true or that
someone could not know them?  The examples most used by philosophers
are simple instances of mathematics and logic.  [Note: Self] What is
so special about them?

Here is a theory of why Arithmetic seems "a priori".  I will call it
"the theory of sparse formulation".  I cannot recall seeing this
simple theory anywhere in Philosophy.  The basic idea is this:

     When a person or a machine "thinks", we can regard this as
     exploring a space of finite formalisms.

     Thinking machines must explore smaller systems first, because
     larger systems are obtained by combining smaller ones
     already found to be useful.

     THESIS: ARITHMETIC IS EQUIVALENT TO THE SIMPLEST FORMALISMS
     THAT EVEN REMOTELY RESEMBLE ARITHMETIC.

     Then any mind inclined toward discovering anything like
     arithmetic will in fact discover arithmetic very early in its
     exploration.

The key to our argument is the observation that ARITHMETIC IS RIGID.
That is, you cannot make small changes in it without making it
immensely more complicated.  When I was a child in school, I once
wondered if there were other things like arithmetic, except with
something other than @I{minus times minus is plus}.  In arithmetic,
two negatives can "cancel", to neutralize one another, like two bads
making a good, two losses a gain.  [Note: signs.]  Could this happen
with more than two things?  Could there be number-like things with
"three signs", that "go three ways" instead of just two?  I tried to
find them, for a few days, but each attempt yielded contradictions
(e.g., One and Two would be the same) or no signs at all.  I gave up.
[Note: mathematics.]  What happens when we try, in fact, to make some
changes in arithmetic?

     We might try to make two different numbers be the same.
     Then we'd get modular arithmetic, or something, which would
     seem very wrong as soon as we add two numbers and get a
     result "smaller" than either.  (A small worked example follows
     this list.)

     A mathematician might invent some "non-standard models" -
     but these would still contain the integers, unchanged.

     In any case, you simply cannot make anything that is
     "similar but locally different". You can't get a system that
     "skips" 4 so that, e.g., 2 + 2 will equal 5.

     If you try to skip 4 you'll probably just change the name
     of 4 to "5".  Your "5" won't be prime, for example, because
     it will be "twice 2".  You can't fix that without making
     "twice" unthinkably disorderly.

     You might consider "the even numbers" to be similar to the
     integers. But I doubt that idea can make useful sense
     without the rest.
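
     The small example promised in the first item above: identify 5 with
     0 and you get arithmetic mod 5, in which

          3 + 4 = 2   (mod 5)

     - an "addition" whose result is "smaller" than either addend.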

It seems, then, that there aren't any other things that resemble
arithmetic - that is, which are not quite the same but just a little
different!  There simply isn't any way to leave some numbers
out, or slip some others in.  The subject has a stark and singular
rigidity.  [Note: children].  You can't make holes in it, bend it or
stretch it, or attach things to it.  It seems to stand "there", all or
none.

     -------  THE SPARSENESS OF SIMPLE FORMULATIONS -------

Where, or what, could that place be - the "there" in which those
numbers live?  We can find "operational definitions" for numbers that
use relatively few rules - or, if you prefer, axioms.  One starts with
using anything - or "nothing" - for Zero, and reach the others by any
procedure that lists different things - and numbers spring to life.
In three steps one has Three, in five steps, Five.  There's no way
anything like that could "skip" a number step.  Now consider "the
universe of processes" defined by any recursive language or formalism
or mechanical system for building machines out of parts:

            x         x   x  
           x   a     x   x  a x
       x  a     \   x  a     \      x  x
         x x x x A   x   x x  A     x
       x  a x   / \ x     x  / \    x   x 
      xxxx   xx/   \x   x xx/xx \ xxx x  x  
      xxxxx x /xxx /\ xx x /x xxx\ x xxxx xx
      x xxxx A x  A  A x  A  x x  A  xxxx xxxxxx
     xxxxxxx/ \xx/ \/ \xx/ \xxxxx/ \xxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxxxxxxxx  A  xxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxxxxxxxxx/ \xxxxxxxxxxxxxxx
     xxxaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxx B xxxxxxxxxxxxxxxxxxxxxx
     xxxxxxxxxxxxxxxxxxx/ \xxxxxxxxxxxxxxxxxxxxxx

The formalisms "A" are the ones that yield things like arithmetic, and
hence arithmetic itself - because they are simple.  The "a"s are
fragments of arithmetic that are simpler and don't work by themselves.
Much, much later, there may appear some formalisms, "B" that resemble
arithmetic but are different.  But they are so much more complicated
that they will be discovered, if ever, only by remote accidents.  And
a person who encounters one of those before he finds an "A" - if such
a thing could happen - might indeed have thoughts very alien to ours.
Thus my final argument.  This is in the nature of computational
universes that are based on composition and construction of complex
things from earlier, simpler ones:

     WHEN RELATIVELY SIMPLE PROCESSES PRODUCE SIMILAR
     THINGS, THOSE THINGS ARE, USUALLY, IDENTICAL!


  ----  HOW THE SPARSENESS THEORY EXPLAINS THE "A PRIORI" ----

Sometimes people and philosophers wonder "How can I be sure that you
mean just what I mean?  What if, actually, when we talk, we have
completely different meanings for our words?  What if our two
languages A and B, which seem the same on the surface, mean entirely
different things?"

For all I know, this may happen in certain of our common sub-worlds of
thought.  But the sparseness theory shows why this is exceedingly
improbable for Arithmetic!  To show this, I will need another
assumption - that the people themselves are computationally similar.
That is, I assume that their brains have roughly the same sort of ways
to make new functions from old ones. Then, when two such people both
have in mind something similar to arithmetic, we can be almost
certain, because of Sparseness, that they are in fact both dealing
with the same structure.  This is because there are no two different
such systems, similar but not identical, that are easily reached with
brain-size representations.  And this is why "true communication" is
really possible for us.  Given that our brains are computationally
just moderately similar, there is an excellent chance that we can
communicate perfectly about the simpler things we know.

To be sure, it is exceedingly unlikely that we can deal so perfectly
with the less essential nuances of mental life, for there is no reason
to suppose that Sparseness applies to such things in interesting ways.
Still, the things that Philosophy deals with are exceedingly simple
and schematic - and to them Sparseness applies with the greatest
force.  So this is why, in Logic and Mathematics, and also to a degree
in Philosophy, it is possible to communicate.  And why, in Logic, and
to some extent, Mathematics as well, the Sparseness is so marked that
we can even agree.

I see no reason one could not go on to capture this idea in the
technical formalism of theories of the Complexity of Computation.  For
any given bound on the initial complexity of a computational system,
say, based on the state-complexity of a Turing machine and its tape,
we could argue that certain mathematical ideas are almost certain to
occur "a priori" with virtually whatever procedures are involved.  For
human minds, the same constraints would hold in regard to whatever
bound we might propose for the programming languages that our
genetics predispose us to construct in our infancies, for representing
procedural knowledge.  Thus certain concepts will tend to be
"Trans-cultural"; the ones so relatively simple, as "ideas" go -
and so useful - that individuals (or cultures) are nearly certain to
encounter them - before similar but different ones - without
cross-contact.
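
One way to make this concrete - a sketch only, here in Python, with the
instruction set {x, 0, succ, add} and the size bound chosen arbitrarily
for illustration - is to enumerate tiny term-programs in order of size
and record the function each one computes:

    # Enumerate small terms over {x, 0, succ, add}, ordered by size,
    # and tabulate the function each computes on a few inputs.  Many
    # syntactically different small terms turn up, but the number-like
    # behaviors among them are not merely similar - they are identical.
    def terms(size):
        if size == 1:
            return [('x',), ('0',)]
        out = [('succ', t) for t in terms(size - 1)]
        for left in range(1, size - 1):
            out += [('add', a, b)
                    for a in terms(left) for b in terms(size - 1 - left)]
        return out

    def ev(t, x):
        if t[0] == 'x':    return x
        if t[0] == '0':    return 0
        if t[0] == 'succ': return ev(t[1], x) + 1
        return ev(t[1], x) + ev(t[2], x)

    behaviors = {}
    for size in range(1, 7):                         # a small complexity bound
        for t in terms(size):
            sig = tuple(ev(t, x) for x in range(5))  # behavior on inputs 0..4
            behaviors.setdefault(sig, []).append(t)

    print(len(behaviors), "distinct behaviors among all the small terms")

Any enumeration bounded by complexity in this way must meet the simple
schemes first, and the sparseness shows up as the small number of
distinct behaviors relative to the number of terms that produce them.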

Now we can see why traditional Philosophy was helpless to cope with
the nature of knowledge.  The traditional categories, like "empirical"
vs. "innate" are not adequate to understand this simple situation.  We
have shown that the "self-evident" character of number is "empirical" -
that is, based on mental experiences - yet, at the same time,
inevitable, hence "a priori" in a sense.  It need not stem
from any @I{a priori mental constraint} that is "built-in" in an
obvious, recognizable fashion.  Instead, it can stem from the
topography of arithmetic itself, from that peculiar sparseness of its
formulations (or Godel numbers) within the vast, weird world of
possibly thinkable processes.

 --------------------------- NOTES ---------------------------

[Note: Development]  Philosophers who talk about @I{a priori} do
not seem aware that young children do not know such things from
the very start.

[Note: children] Perhaps this is a danger to children. In many ways,
it is a very barren world.  Within arithmetic, of course, there is a
universe of things to do; find different ways to compute things, and
different ways to think of them.  But, on that larger scale, there's
nothing that the kid can do to mold and shape the thing to other
purposes.  One has to mold, instead, the ways one sees the world.  Is
arithmetic really a good subject for starting to train children's
minds?

Consider those "sign rules" - like @I{minus times minus is plus} -
which seem almost "self-evident" to mathematical adults.  Few of us
remember our first experiences, in which this was @I{not} obvious at
all, and surely, many of us found it faintly paradoxical, with the
flavor of questions like @I{if you lie about lying, are you telling
the truth?}

[Note: mathematics.] Alas, if I'd persisted I might have found the
Gaussian integers, or Hamilton's quaternions, or Pauli's spin
matrices, or something nice like that.  But, then, I wouldn't have
known any uses for them.

[Note: Self] Most philosophical speculations about the @I{a priori} are
inexcusably naive about child development.  Many things that seem
self-evident to an adult are only so to an adult, hence that says
something about adults, but not so much about "self-evidence".  Is the
"self" in "self-evident" the fact's self or the person's self?  We
almost never remember how we came to believe the most "obvious"
things, because things tend to seem more "obvious" the earlier we
learned them - and hence the less we remember about finding them out.
Unfortunately infants don't talk much about what is self-evident.
-------

∂22-Jan-83  1251	John McCarthy <JMC@SU-AI>
Date: 22 Jan 83  1239 PST
From: John McCarthy <JMC@SU-AI>
To:   minsky@MIT-OZ
CC:   phil-sci@MIT-OZ 

I agree with your theory of sparseness, which resembles the sign posted
in the Durgin-Park restaurant: "There isn't any place anything like this
place anywhere near this place, so this must be the place".  I have called
the theory "the argument from cryptography" in my "Ascribing Mental
Qualities to Machines" paper.  In principle, a cryptogram could have
multiple solutions, but this doesn't happen.  When someone gets an
English text from a cryptogram, it always turns out to be
the text the writer of the cryptogram started with.  There is a slight
counterexample in French where a cryptogram has two solutions both
making sense, "Le prisonnier est fort, il n'a rien dit" and "Le prisonnier
est mort, il n'a rien dit", which translate to "The prisoner is strong,
he has said nothing" and "The prisoner is dead, he has said nothing".
The difference is based on a letter occurring once in the cryptogram which
could either be an "f" or an "m".  Philosophers often pose the
possibility that a novel in one language could be a cookbook in
another and therefore suggest that a child needs some a priori built-in
knowledge that leads him to human language and not some other.  However,
the Shannon information theory indicates that a book length text having
two completely different interpretations in two languages is as
improbable as the molecules of air all rushing to one side of the room.
Russian and English share 20 glyphs in their alphabets, but the longest
texts anyone has come up with that could be either Russian or English
have three letters.  Thus the inscription  POT means pot in English and
company (military) (pronounced like the English "rote") in Russian.
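One rough way to quantify that improbability is Shannon's
unicity-distance estimate; with textbook figures assumed for the
entropy and redundancy of English (not values taken from anything
above), a few lines of Python give the flavor:

    import math
    # Key entropy of an arbitrary letter substitution, and the
    # redundancy of English per letter (log2(26) ~ 4.7 bits possible,
    # roughly 1.5 bits actually carried).
    key_entropy = math.log2(math.factorial(26))   # about 88 bits
    redundancy  = math.log2(26) - 1.5             # about 3.2 bits per letter
    print(round(key_entropy / redundancy))        # about 28 letters

Past a few dozen letters of ciphertext a second coherent decipherment
becomes astronomically unlikely, let alone a cookbook hiding inside a
novel.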
However, I don't believe this sparseness argument makes obsolete
traditional treatments of truth, etc.  It's merely another fact to
be taken into account.


∂22-Jan-83  1328	MINSKY @ MIT-MC
Date: Saturday, 22 January 1983  16:05-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ, MINSKY @ MIT-OZ
In-reply-to: The message of 22 Jan 83  1239 PST from John McCarthy <JMC at SU-AI>


Oh, good.  In fact, I'd like to append your message as a note
to this when I make it a memo.  

	However, I don't believe this sparseness argument makes
	obsolete traditional treatments of truth, etc.  It's merely
	another fact to be taken into account.

Well, I wonder.  I don't know the theories of truth so well.  I do
think it makes obsolete a lot of other parts of philosophy concerned
with how we know things, what is innate and A Priori, etc.  Also, if
not how we know what is true, at least about how we choose what things
to think might be true.

But really, I'm most concerned with how we learn things - given that
we are in a world that can support machinery and are made of
machinery.  (We appear to share that prejudice.)  And I think a lot of
philosophy is just bad - e.g., all the stuff on "know" vs. "believe"
and work on "induction" and on "justification" and even on
"refutation".  The reason is that I think the idea of "know" vs.
"Pretty sure" is just plain bad psychology.  The "sparesness" or
"cryptographic" theory affects philosophy because (I think) it
explains why some opinions or beliefs are so secure - for reasons of
mathematical topology - that they appear to be "true" or "known"
rather than "plausible" or "believed".

∂22-Jan-83  1425	DAM @ MIT-MC 	Hewitt's claim and Church' thesis   
Date: Saturday, 22 January 1983  17:15-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Hewitt's claim and Church' thesis


	There is definitely a contrast between the following
two results:

1) For any non-deterministic Turing machine M, if M is guaranteed
to halt (i.e. if M's computation tree is of finite depth) then there is some
bound B(M) such that every possible output of M is less than B(M).

2) There are physical processes which are guaranteed to halt (halt with
probability 1 but have an infinite computation tree) which have no bound
on their output.
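
A minimal sketch of a process of the second kind (an invented example,
here in Python): flip a fair coin until it comes up heads and output
the number of flips.  It halts with probability 1, yet no finite bound
covers all possible outputs, since a run of N tails has probability
2**-N > 0 for every N.

    import random

    def flips_until_heads():
        # The computation tree is infinite (any run of tails can be
        # extended), but the process terminates with probability 1.
        flips = 0
        while True:
            flips += 1
            if random.random() < 0.5:   # "heads"
                return flips

    # The maximum observed output creeps upward as the number of trials
    # grows; no single bound B works for every possible run.
    print(max(flips_until_heads() for _ in range(100000)))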

	These results are clearly not in contradiction as long as
one is very explicit about the notion of "guaranteed to halt".  If
one considers a non-deterministic Turing machine to be a probabilistic
system (its choices have associated probabilities) then there are lots
of machines with infinite computation trees (and thus are not guaranteed
to halt in the first sense) but halt with probability 1 (and are guaranteed
to halt in the second sense).  I find Hewitt's claim sort of obvious
once it is understood in this way, and furthermore I find it fairly
uninteresting in the sense that it does not detract from the former result
as he seems to claim (there are lots of obvious BUT INTERESTING results,
this is just not one of them).

	David Mc

∂22-Jan-83  1438	BAK @ MIT-MC 	Hewitt's claim  
Date: Saturday, 22 January 1983  17:19-EST
Sender: BAK @ MIT-OZ
From: BAK @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   Batali @ MIT-OZ, DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Hewitt's claim

    Date: Saturday, 22 January 1983  08:18-EST
    From: MINSKY
    Sender: MINSKY
    To:   BAK

    No. This is all wrong.  When the clock halts, the number is finite and
    hence computable.  What you can't compute is what the number will be.  
    All this says is that a probabilistic event is not deterministic.

You're completely right about this and I realize that I spoke too
hastily.  The fan argument, by the way, is Dijkstra's and not
Hewitt's.  The reason actor semantics is useful is that it allows
one to prove results about deadlock or starvation-freeness for
distributed systems.  It seems to me to be a perfectly reasonable
model for macroscopic distributed systems, even if we have to tag
every result with a "with probability 1".  It has no relevance
whatever to recursive function theory or physics.

∂22-Jan-83  1724	DAM @ MIT-MC 	Objectivity in Mathematics (Minsky's Theory)  
Date: Saturday, 22 January 1983  19:57-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Objectivity in Mathematics (Minsky's Theory)


		Minsky's Theory Misses The Point

	I agree that one should consider some sort of "search space"
of possible "ideas" or "computational systems".  I have also been
convinced by Minsky's message that this space seems sparse so that the
idea of "the numbers" has no near neighbors.  I further agree that
this is probably the reason that most people have exactly the same
notion of number.  However I think that Minsky has missed the point of
those who claim that mathematics is a-priori, objective, or innate.
The point is not that the statement "one plus one is two" is objective
and a-priori but instead that this statement FOLLOWS FROM CERTAIN
DEFINITIONS, and it is the entailment between the definitions and
statement which is objective and a-priori.  Children usually cannot
understand sophisticated definitions, and adults must relearn much of
their mathematical knowledge before they understand it as being purely
definitional or tautological.  Much of the history of mathematics
involves a gradual shift from empirical notions to purely definitional
statements, and the criticisms of Lakatos are based, at least in part,
on mathematical enterprises which were not yet purely definitional.

	 The Nature Of The Search Space Is Important

	There is no single unambiguous notion of "a search space of
ideas" and the details of the search space may be important for
"learning" and "understanding".  Minsky plays down the importance of
the nature of the search space.  He seems to view it as a space of
"computational systems" or possibly of turing programs, or perhaps as
a space of programs for some simple universal parallel machine.  He is
not very specific about exactly what this search space is.  Thus Minsky
seems to feel that his conjectures about this space are somewhat
independent of these details.  I feel otherwise.
	I think that the search space of "ideas" is highly constrained
and sophisticated and that the a-priori definitional tautologies of
mathematics are somehow related to the nature of this search space.
Not all computational systems need be organized around discrete
statements.  Yet it seems (empirically) that human thought universally
involves such statements.
	A constrained search space is not a "smaller" search space.
It is important to realize that any plausible search space of ideas
will be infinite.  Furthermore computer science teaches us that almost
any system is universal.  Thus any plausible search space of ideas
will be both infinite and "universal".  However search spaces may
differ in their "sparseness" or in the degree to which they facilitate
"learning" or "understanding".  It seems to me that a search space
of partial definitions is more plausible than a search space of computer
programs.  In fact Minsky's examples involve definitions not programs.


	I think definitional tautologies are objective (and innate)
and should be interesting to AI researchers for precisely this reason.
If there is a rich sophisticated innate structure it seems it must be
there for some reason (though I am less sure of the reason for
mathematics than I am of its objective existence).  Would Minsky argue
that "two plus two is four" does not follow from Piano's axioms?

	David Mc

∂22-Jan-83  1907	MINSKY @ MIT-MC 	Objectivity in Mathematics (Minsky's Theory)    
Date: Saturday, 22 January 1983  22:02-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Objectivity in Mathematics (Minsky's Theory)
In-reply-to: The message of 22 Jan 1983  19:57-EST from DAM


I haven't studied the ideas in your message carefully, but I will.  In
the meantime, my first reaction is that there is no basis for supposing
that the entailments between definitions and statements are anywhere
near as a priori as you suggest.  To be sure, children must
reformulate other kinds of belief systems - but they also must
reformulate their entailment systems, as Piaget shows in so many ways.

Most adults regularly use wrong entailment procedures, e.g., like "If
most A's are B's and most B's are C's then most (or many) A's are C's."
Is it all right for A Priori stuff to be wrong?
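
A concrete counterexample to that rule (the particular sets are
invented purely for illustration):

    A = set(range(0, 10))      # 10 things
    B = set(range(4, 104))     # 100 things
    C = set(range(10, 110))    # 100 things
    most = lambda X, Y: len(X & Y) / len(X) > 0.5
    # Most A's are B's, and most B's are C's, yet no A is a C.
    print(most(A, B), most(B, C), most(A, C))   # True True False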

Gulp: I didn't say anywhere that the constrained spaces are smaller
(though they probably are).  My whole point was that NO MATTER WHAT
THE PROCEDURE, IT MUST EXAMINE "SIMPLER" SCHEMES EARLIER so the
sparseness thing IS pretty much independent of details.

I think that one arrives at Two and Two is four in lots of ways.
People did it for millennia before Peano's axioms, whatever they are.
Would DAM argue that those particular axioms are the ones normal
humans use?

∂22-Jan-83  1917	William A. Kornfeld <BAK at MIT-OZ at MIT-MC> 	Sparseness theory 
Date: Saturday, 22 January 1983, 21:14-EST
From: William A. Kornfeld <BAK at MIT-OZ at MIT-MC>
Subject: Sparseness theory
To: minsky at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

 
I like your sparseness theory a lot.  In fact, what seems to make a lot of
mathematics interesting is that when you try to extend structures that
you already have by filling in obvious holes there is often only one
way to do it.

Consider your example with negative numbers.  Start with the
natural numbers (0,1,...) and +, so that this structure satisfies the
unique successor property, + is a total function NxN->N, commutativity,
associativity, 0+X=X, and when n+X=m, X is unique if it exists.  This is
what you think numbers are in grade school.  Now it's very reasonable to
want there always to be a solution to all such equations n+X=m.  It's
easy to show that any extension of the natural numbers that forms an
abelian group under + must contain a subset isomorphic to the integers.
Your minus-cancelling rules follow by making this structure a
ring:

(-k)*(-n) = X
k*(-n) + (-k)*(-n) = k*(-n) + X
(k + -k) * -n = (k * -n) + X
0 = (k * -n) + X
k*n = k*n + (k * -n) + X
k*n = X

Similarly, we are forced to believe in the rationals because they must
exist if we are to solve all equations of the form A * X = B.

We are forced to believe in the complex algebraic numbers because we
want to solve any equation expressed with + and *.

The fact that each of these extensions is unique gives a sense of
importance to these results.  Perhaps this is a theory of interesting
vs. uninteresting mathematics or why mathematics itself is interesting.
However, I think this is separable from considerations of a theory of
truth.  It seems to me that any creature that accepted the field axioms
and that called the multiplicative identity 1 and its inverse -1 would
be forced to conclude that -1 * -1 = 1.  If it didn't then we would have
to assume that it was either working from something other than the field
axioms, or that our mathematics has a bug and isn't consistent (and that
we would be capable of understanding the bug if pointed out to us), or
that it was in error.
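
The last point can even be checked mechanically; a sketch in Lean 4
with Mathlib (a modern restatement of the point, not anything from the
message itself) shows that the ring axioms alone force the sign rules,
with no further choice left to the creature doing the reasoning:

    import Mathlib

    -- Any ring whatsoever must satisfy "minus times minus is plus".
    example (R : Type) [Ring R] (k n : R) : (-k) * (-n) = k * n :=
      neg_mul_neg k n

    -- In particular, the multiplicative identity and its negation
    -- behave as described: (-1) * (-1) = 1.
    example (R : Type) [Ring R] : (-1 : R) * (-1) = 1 := by
      rw [neg_mul_neg, one_mul]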

∂23-Jan-83  0124	John McCarthy <JMC@SU-AI>
Date: 23 Jan 83  0114 PST
From: John McCarthy <JMC@SU-AI>
To:   phil-sci@MIT-OZ  

	Reading the article "The coherence theory of truth" in the
Encyclopedia of Philosophy and recalling Marvin's remarks about
sparseness leads me to change somewhat my views expressed previously.
The article emphasizes the fact that we don't verify single statements,
but whole complexes of statements, i.e. theories and languages.
Moreover, within such a theory it is often, and perhaps necessarily,
unclear which statements are unadulterated observations, which are
definitions, and which are theoretical statements whose truth may
be verified.

	All this doesn't interfere with the correspondence theory
of truth as applied to single statements in a given language such
as English.  Nor does the fact that theories are verified as wholes,
rather than statement by statement, necessarily affect a viewpoint
according to which truth has little to do with the method of verification.
However, it suggests we can do better.

	Suppose someone supplies us with a 50,000 word textbook on Newtonian
mechanics written in Martian.  Suppose that there are some errors
in the text and moreover we don't know the subject matter, there
are no diagrams, and we can't read Martian.  It may be very difficult
and take a long time to figure out what the document is.
Someone may initially advance the theory that the document is a cookbook
or a novel.  Nevertheless, it is extremely improbable, e.g. of the
order of the molecules rushing to the other side of the room, that
this textbook admits any coherent interpretation other than as a textbook
of Newtonian mechanics with a few errors.  Whether cryptography
and linguistics are presently up to guessing it, I don't know.
Remember that the still undeciphered inscriptions in unknown
languages are almost certainly not narratives or exposition but
mostly lists, whether of kings or the contents of warehouses.

	What this suggests is that a long enough document may
have certain statements in it true or false in an absolute,
language independent sense.  Of course, it may also assert
a myth in the same absolute sense of admitting that and no other
interpretation.  Some of the background for these assertions
is in Shannon's 1948 Bell System Technical Journal article
on the probabilistics of cryptography.  These ideas may be
combined with those of Solomonoff, Kolmogorov and Chaitin.

∂23-Jan-83  0410	GAVAN @ MIT-MC 	correspondence theory   
Date: Sunday, 23 January 1983  07:06-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 21 Jan 83  1848 PST from John McCarthy <JMC at SU-AI>

    Date: 21 Jan 83  1848 PST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan
    cc:   phil-sci at MIT-OZ
    Re:   correspondence theory  

    Subject: correspondence theory
    In reply to: Gavan of 1983 jan 21
    Gavan: Indeed!  Even if there were no English interpreters, the statement
    "The world is round" would still be a true sentence of English.  As far
    as I can tell, this would be the position of all the supporters of the
    correspondence theory including Tarski.  A sentence in a language
    is an abstract object existing mathematically independent of whether
    anyone ever interprets it or even exists to interpret it.

What, then, is an "abstract object"?  Do any of these exist
independent of an interpreter?  Does mathematics exist independent of
an interpreter?  How do you know?  Also, do you believe that all
sentences in all languages can be transformed into mathematical
expressions?

If there were no English interpreters, the statement "the world is
round" would be as random a string as any other typed out by your
British Museum monkey.

∂23-Jan-83  0515	GAVAN @ MIT-MC 
Date: Sunday, 23 January 1983  08:06-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   minsky @ MIT-OZ, phil-sci @ MIT-OZ
In-reply-to: The message of 21 Jan 83  2249 PST from John McCarthy <JMC at SU-AI>

    Date: 21 Jan 83  2249 PST
    From: John McCarthy <JMC at SU-AI>
    To:   minsky, gavan
    cc:   phil-sci at MIT-OZ

    Marvin: Indeed!  My reaction to Tarski's exposition was the same,
    except for reading it in English.  The correspondence theory of truth
    seems obvious, because it agrees with the common sense notion.  It is
    only when someone proposes some other theory or denies that there is
    any such thing as a true sentence that Tarski's statements seem other
    than tautologous, i.e. that there seems to be a need for a theory of
    truth.  Notice, however, that the correspondence theory requires that there be
    something objective to correspond to - a physical world that either agrees or
    not with the sentences or mathematical objects such as sets that either
    do or don't have the properties asserted.  This also is the common
    sense view, and hence IS obvious unless challenged.  

Consider it challenged.  A problem is that what's in the sets is
dependent upon the observer who so classifies the objects.  Who is to
independently verify whether some scientist's set-classification
algorithm (the intension) corresponds to the set which includes the
objects with the asserted properties (the extension)?  Well, maybe God
can do this, but no one else.  If anyone else does it, then he/she is
only comparing his/her extension with what he/she believes is the
extension of the scientist's intension.  Would that get us anything
useful?  I don't think so.

    As I understand it, coherence theories don't require that there be
    an objective reality, since they purport only to relate experiences.

Yes, objective reality is not REQUIRED by coherence theories.  But,
then again, it's not PROHIBITED either.  But you can ASSUME the
existence of an objective reality.  You can't prove it exists, as the
ancient sceptics argued.  You also can't say that your perception of
reality is THE objective reality, since you're merely a subject, like
the rest of us.  

    However, we correspondence
    theorists consider that the coherence theorists have been unsuccessful
    in relating experiences to one another except in so far as they have
    allowed external reality to sneak back into their theories.

A coherence theorist need not deny the existence of "reality."  Some
idealists do this, but then a coherence theorist need not be an
idealist.  He/she might be what Putnam calls an "internalist."  An
internalist does not deny the existence of reality.  He/she just denies
the "externality" of reality.

    In the hopes of eliciting some reaction from someone besides GAVAN, who
    seems not to believe in objective reality, I will again
    advocate meta-epistemology.  

I certainly DO "believe in" reality.  But I don't presume to take this
belief as anything more than an article of faith on my part.  I feel
that if I did, I'd be setting myself up as God (who of course, if
he/she exists, can be the only being with a meta-epistemology).  The
only access I have to reality is the access I have by way of my
beliefs.  My beliefs are often confounded by other factors, such as my
desires and the language games I play, so much so that I cannot allow
myself to believe that the reality I "believe in" is an "objective"
one.  I don't simply "copy" the world, so I don't presume that what
can be taken for a correspondence is "really" a correspondence.
There's always a residue of doubt, no matter how sure I am (at least I
think so).  No OBJECTIVE reality can be known by any subject, only
SUBJECTIVE reality can (unless you can show me a foolproof way to be
objective).

    We try to get a mathematical
    theory of the relation between the strategy of a knowledge
    seeker in a world and its success in discovering facts about
    the world.  This theory doesn't directly involve conjectures
    about the real world, because the worlds studied are abstract
    mathematical objects. . . .

As I said earlier, this project might prove useful for some purpose
within mathematics, but I can't see how it could be applied to
anything more empirical than that.  Do you?  If so, how could you
interpret your results to have meaning outside the domain of pure
mathematics?  How could it be applied, say, to the social sciences?

∂23-Jan-83  1125	DAM @ MIT-MC 	Minsky's Theory 
Date: Sunday, 23 January 1983  13:44-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Minsky's Theory


	Date: Saturday, 22 January 1983  22:02-EST
	From: MINSKY

	Most adults regularly use wrong entailment procedures, e.g., like "If
	most A's are B's and most B's are C's then most (or many) A's are C's."
	Is it all right for A Priori stuff to be wrong?

	It is often very difficult to separate "using a wrong entailment
procedure" from "believing a false generalization".  These people may
just believe wrong things.  Furthermore the notion of a definition or
purely hypothetical situation is sometimes hard to communicate.  People
tend to assume that any discussion is about the real world (it took
me a long time to accept the idea that one could talk about PURELY
HYPOTHETICAL worlds).
	The claim that mathematics is a-priori and objective is a
claim about the truths which follow from definitions and assumptions.
If it is really the case in a HYPOTHETICAL world that ALL birds fly,
and Fred is a bird in that world, then Fred flies.  Would you deny
this?  If not what is the explanation for the a-priori objective
nature of this claim?  ALL of mathematics seems just as undeniable
as this simple syllogism.

	David Mc

∂23-Jan-83  1128	DAM @ MIT-MC 	Minsky's Theory 
Date: Sunday, 23 January 1983  14:01-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Minsky's Theory


	Date: Saturday, 22 January 1983  22:02-EST
	From: MINSKY

	Gulp: I didn't say anywhere that the constrained spaces are smaller
	(though they probably are).  My whole point was that NO MATTER WHAT
	THE PROCEDURE, IT MUST EXAMINE "SIMPLER" SCHEMES EARLIER so the
	sparseness thing IS pretty much independent of details.

	Sorry, I didn't mean to imply that you did say constrained
spaces were smaller (and I think they are really not).  Even if the
sparseness claim is independent of the search space (and I doubt this)
the "effectiveness" of a given search space may still depend on the
details of the space (and I suspect the dependence is strong).

	I think that one arrives at Two and Two is four in lots of ways.
	People did it for millennia before Peano's axioms, whatever they are.
	Would DAM argue that those particular axioms are the ones normal
	humans use?

	Whether or not people use Peano's axioms is not the issue.  The
issue is whether "two plus two is four" follows from Peano's axioms.

	Of course people understand numbers long before they understand
any explicit statement of Peano's axioms (since people already understand
numbers, explicit axioms are redundant anyway).  But what is the internal
representation of the notion of "number"?  We think about numbers all
the time but we have no introspective access to the data structures
which "define" or "describe" or "determine" what we mean by "a whole number".
I do not claim that these data structures are precisely Peano's axioms
but I wouldn't be surprised if the data structure was some "definition"
in some "internal sentential language".

	David Mc

∂23-Jan-83  1149	DAM @ MIT-MC 	Corrospondence Theory
Date: Sunday, 23 January 1983  14:38-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JMC @ SU-AI
cc:   Phil-sci @ MIT-OZ
Subject: Corrospondence Theory


	Date: 21 Jan 83  2249 PST
	From: John McCarthy <JMC at SU-AI>

	In the hopes of eliciting some reaction from someone besides GAVAN, who
	seems not to believe in objective reality, I will again
	advocate meta-epistemology.

	I tend to agree with Gavan in that "I do not believe in" a
single objective reality.  I do believe in objective empirical truth
accessible through sense data via Occam's razor.  Furthermore I think
that the correspondence theory of truth is an extremely important
paradigm and analytic tool for epistemology.  I plan to give a
detailed response to your message after the discussion of the
objectivity of mathematics has died down a bit.

	David McAllester

∂23-Jan-83  1347	ISAACSON at USC-ISI 	Re:  Minsky's Theory    
Date: 23 Jan 1983 1303-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Minsky's Theory
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]23-Jan-83 13:03:39.ISAACSON>
Redistributed-To: phil-sci at MIT-MC
Redistributed-By: ISAACSON at USC-ISI
Redistributed-Date: 23 Jan 1983


In-Reply-To: Your message of Sunday, 23 Jan 1983, 13:44-EST


"If it is really the case in a HYPOTHETICAL world that ALL birds
fly, and Fred is a bird in that world, then Fred flies.  Would you
deny this?"


It ain't that clear-cut, I think.  The mixture of ordinary
English, the syllogistic template, and notions of a priori truth
may be misleading.  For example, by saying that "ALL birds fly"
do you also postulate that in your world birds can't be sick,
crippled, newly-hatched, and what not?  What if Fred HAS a
permanently broken wing?


Yes, I know, you're interested in the syllogistic chain proper,
while suppressing whatever ordinary connotation your statements
may convey to the bird ethologist.  But this may be confusing to
many, unsuspecting, good souls.  Besides, I think that Konrad
Lorenz's world is far more relevant to AI-type worlds than your
type of barren syllogistic domains.


Worse yet.  Here is another (hypothetical) world constructed to
suit the very same syllogistic template -

In a HYPOTHETICAL village:

Mr.  Beardman shaves ALL persons that do not shave themselves.

Mr.  Beardman lives in said village.

Then Mr.  Beardman...  Ooops...  shaves/does not shave (???)
himself.


-- JDI


∂23-Jan-83  1840	MONTALVO@HP-HULK@HP-VENUS@RAND-RELAY 	Re: Summaries, please ...  
Return-Path: <MONTALVO@HP-HULK@HP-VENUS@RAND-RELAY.HP-Labs@Rand-Relay>
Date: 21 Jan 1983 1527-PST
From: MONTALVO@HP-HULK@HP-VENUS@RAND-RELAY
Subject: Re: Summaries, please ...
To: PHIL-SCI@MIT-MC
,
Cc: ISAACSON@USC-ISI, MONTALVO@HP-HULK, MONTALVO@HP-HULK@HP-VENUS@@@HP-labs
        HP-VENUS@RAND-RELAY
Reply-To: FSM.HP-HULK at UDEL-RELAY
In-Reply-To: Your message of 18-Jan-83 2112-PST
Via:  HP-Labs; 21 Jan 83 20:00-PDT

I'd like to second that motion.  The whole mailing is getting just
too unwieldy.  Could people also confine discussion to more
concise and precise replies?  Not all of us are blessed with
access to Lisp Machines or Babyl.

Fanya
-------


∂23-Jan-83  1923	BATALI @ MIT-MC 	Correspondence    
Date: Sunday, 23 January 1983  22:09-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Correspondence

The point of the meta-epistemology proposal was to determine what
sorts of beliefs and attitudes we (as, essentially, God) ought to put
in a robot.  What more can the coherence theorist say to the robot
other than that its beliefs ought to be coherent?  The correspondence
theorist will say that, of course, but will also instil in the robot
the belief in some "objective" reality which is the ultimate test of
theories.  I think that a robot with this idea will be able to do more
than one without.  For example: How would a coherence-theorist robot
ever get the idea to do an experiment?  What do coherence advocates
say an experiment is, anyway?

∂23-Jan-83  2052	MINSKY @ MIT-MC 	Minsky's Theory   
Date: Sunday, 23 January 1983  23:45-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Minsky's Theory
In-reply-to: The message of 23 Jan 1983  13:44-EST from DAM


MINSKY:	Most adults regularly use wrong entailment procedures, e.g., like "If
	most A's are B's and most B's are C's then most (or many) A's are C's."
	Is it all right for A Priori stuff to be wrong?

DAM:  	It is often very difficult to separate "using a wrong entailment
	procedure" from "believing a false generalization".  These
	people may just believe wrong things.

Well, it may be difficult, but I don't see how you find it so easy to
suppose that it is possible for people to believe wrong things, but
not possible for them to employ wrong inferences.  As you know, my
position is that when you talk about

	"the truths which follow from definitions and assumptions"

the procedures one uses to see what "follows" from what are dearly won
during child development - hence the idea of "a priori" seems to me
the delusion of philosophers who live in fantasy worlds of
hypothetical psychology.

The funny part is, I can easily imagine building robots that come into
the world with correct rules of inference and correct procedures for
controlling the value assignments of free and bound variables.  So,
there is no reason that some creature could not be born with correct,
"a priori" mathematical intuitions.  The funny part, then, is that
while that is possible, it clearly isn't true for people.  

Are you really asserting that human intuition about entailment, even
in early childhood, is always perfect, never wrong, and only applied
to wrong suppositions?  If so, what is the evidence?

It occurs to me that we are talking past one another, and using "a
priori" in different ways.  You seem to be saying that "a priori"
means that "mathematics is true", while I always thought that "a
priori" meant things like "people just know, without learning or being
told, that (say) mathematics is true".  If that's what we're
discussing, I have no quarrel; I believe that the mathematics I
understand now is OK.  (Unfortunately, my experience has shown that
some of it, at least, will turn out bad from time to time, and I'll
have to change some of the details of those beliefs.)

∂23-Jan-83  2056	MINSKY @ MIT-MC 	Minsky's Theory   
Date: Sunday, 23 January 1983  23:54-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Minsky's Theory
In-reply-to: The message of 23 Jan 1983  14:01-EST from DAM


DAM: We think about numbers all the time but we have no introspective
	access to the data structures which "define" or "describe" or
	"determine" what we mean by "a whole number".  I do not claim
	that these data structures are precisely Peano's axioms but I
	wouldn't be surprised if the data structure was some
	"definition" in some "internal sentential language".

Well, I would be surprised if that were the only one, although I would
agree that part of our thinking involves "definitional" elements.  May
I refer to the section in my paper "Why People Think Computers Can't"
about the mentation of number?  (This version is rather better than
the one in Learning Meaning, but not otherwise greatly different.)
The idea is that the human "concept of number" probably cannot be
captured by any single definition, because it involves a web of
reformulation skills (to apply it to different applications) as well
as a variety of different procedural and declarative and mixed
representations.

By the way, I still consider it remotely possible that an intelligent
thinking machine can some day be built using tidy, orderly, single
definitions - along the lines proposed long ago by McCarthy.  But
I see that as very far away at present.

∂23-Jan-83  2111	MINSKY @ MIT-MC 	Minsky's Theory   
Date: Sunday, 23 January 1983  23:58-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-MC
Subject: Minsky's Theory
In-reply-to: The message of 23 Jan 1983  16:03-EST from ISAACSON at USC-ISI


ISAACSON:  It ain't that clear-cut, I think.  The mixture of ordinary
	English, the syllogistic template, and notions of a priori
	truth may be misleading.  For example, by saying that "ALL
	birds fly" do you also postulate that in your world birds
	can't be sick, crippled, newly-hatched, and what not?  What if
	Fred HAS a permanently broken wing?

I'm afraid I agree with ISAACSON that there are problems.  Even if we
ignore all the problems of how our syllogistic templates apply to the
real world, how does DAM's apriorism survive what happened to our
beliefs in naive set theory when we faced the barber?

∂24-Jan-83  0130	KDF @ MIT-MC 	Minsky's Theory 
Date: Monday, 24 January 1983  04:27-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Minsky's Theory
In-reply-to: The message of 23 Jan 1983  23:45-EST from MINSKY

	Psychologists have asked questions like these, and their
evidence comes down firmly on Marvin's side.  The most recent stuff
that comes to mind is that of Johnson-Laird and cohorts (within the
last year or two of Cog.Sci. lies a representative article that has
more pointers).  As I recall, he argues that people usually do simple
logic problems by constructing a model and applying simple procedures
to it.  Many of the errors that arise can be explained by poorly
constructed models.  So far, the evidence is against Modus Ponens
being accessibly wired in...

∂24-Jan-83  0139	GAVAN @ MIT-MC 	Correspondence, Coherence, and Consensus    
Date: Monday, 24 January 1983  04:36-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Correspondence, Coherence, and Consensus
In-reply-to: The message of 23 Jan 1983  22:09-EST from BATALI

    Date: Sunday, 23 January 1983  22:09-EST
    From: BATALI

    The point of the meta-epistemology proposal was to determine what
    sorts of beliefs and attitudes we (as, essentially, God) ought to put
    in a robot.  

Is that really the point?  I thought JMC said it had something to do
with comparing the statements of scientists to the "actual" properties
of the world and then drawing some sort of conclusion about the
extensibility of scientific approaches.  Correct me if I'm wrong.

    What more can the coherence theorist say to the robot
    other than that its beliefs ought to be coherent?  

Well I think more would have to be said to a robot than simply, "make
your beliefs cohere!"  For example, some sort of algorithm would have
to be devised for coherence (relational density?).

    The correspondence
    theorist will say that, of course, but will also instil in the robot
    the belief in some "objective" reality which is the ultimate test of
    theories.  I think that a robot with this idea will be able to do more
    than one without.  

Well no one is saying that a robot shouldn't be able to test theories
against reality.  The coherence theorist doesn't deny reality, only
its objectivity.  You will want this robot to recognize it can
potentially make mistakes, won't you?  The ultimate test of a theory
is hardly "objective" reality.  Only an infallible being could have
access to such a thing (if there is such a thing).

    For example: How would a coherence-theorist robot ever get the
    idea to do an experiment?

The same way a correspondence-theorist robot would.  A robot could
believe in the existence of reality without necessarily believing in
the "objectivity" of its beliefs or the correspondence of its beliefs
to that reality.

    What do coherence advocates say an experiment is, anyway?

An experiment is an empirical test of an hypothesis.  Hypotheses are
drawn from the experiences of the experimenter.  Unless they are null
hypotheses, they are the likely results of an experiment given the
experience of the experimenter.  In other words, a non-null hypothesis
is an expectation of the result of the experiment.  The expectation is
the result that would require the least amount of reformulation of the
experimenter's beliefs.  The more coherent or relationally dense these
beliefs are, the more likely the hypothesis will be correct in the
expectations of the experimenter.

Where does the correspondence theorist think hypotheses come from?

∂24-Jan-83  0155	GAVAN @ MIT-MC 	Corrospondence Theory   
Date: Monday, 24 January 1983  04:51-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   JMC @ SU-AI, Phil-sci @ MIT-OZ
Subject: Corrospondence Theory
In-reply-to: The message of 23 Jan 1983  14:38-EST from DAM

    Date: Sunday, 23 January 1983  14:38-EST
    From: DAM
    Sender: DAM
    To:   JMC at SU-AI
    cc:   Phil-sci
    Re:   Corrospondence Theory

    	Date: 21 Jan 83  2249 PST
    	From: John McCarthy <JMC at SU-AI>

    	In the hopes of eliciting some reaction from someone besides GAVAN, who
    	seems not to believe in objective reality, I will again
    	advocate meta-epistemology.

    	I tend to agree with Gavan in that "I do not believe in" a
    single objective reality.  I do believe in objective empirical truth
    accessible through sense data via Occam's razor.  Furthermore I think
    that the correspondence theory of truth is an extremely important
    paradigm and analytic tool for epistemology. . . .

How can a subject claim access to objective truth?  How can you not
believe in a single objective reality and still find the
correspondence theory to be a useful analytical tool (I agree that
it's an important paradigm)?  When you're defending Occam's razor and
particular theories of truth, please remember that what you claim must
hold for sciences outside the domains of physics and mathematics as
well as inside.


∂24-Jan-83  0205	GAVAN @ MIT-MC 	Minsky's Theory    
Date: Monday, 24 January 1983  05:01-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Minsky's Theory
In-reply-to: The message of 23 Jan 1983  23:45-EST from MINSKY

    Date: Sunday, 23 January 1983  23:45-EST
    From: MINSKY

    It occurs to me that we are talking past one another, and using "a
    priori" in different ways.  You seem to be saying that "a priori"
    means that "mathematics is true", while I always thought that "a
    priori" meant things like "people just know, without learning or being
    told, that (say) mathematics is true".  If that's what we're
    discussing, I have no quarrel; I believe that the mathematics I
    understand now is OK.  (Unfortunately, my experience has shown that
    some of it, at least, will turn out bad from time to time, and I'll
    have to change some of the details of those beliefs.)

I believe the confusion is a conflation of "pure" and "impure" a
priori.  The pure a priori is what Marvin seems to mean -- the innate
knowledge which does not require empirical experience but instead is
required in order to have any empirical experience.  

I think mathematical knowledge can be taken a priori (as can any
knowledge) but, for humans, it is not pure a priori.

What are the necessary conditions for learning mathematics?

∂24-Jan-83  0422	John McCarthy <JMC@SU-AI> 	objective physical and mathematical worlds 
Date: 24 Jan 83  0111 PST
From: John McCarthy <JMC@SU-AI>
Subject: objective physical and mathematical worlds 
To:   phil-sci@MIT-OZ  

Subject: objective physical and mathematical worlds
In reply to: mainly GAVAN
	Whether the world should be regarded as a construct from
sense data and/or other experience or should be regarded as
existing independently of mind has been argued for centuries.
The same question arises about whether mathematical facts are
to be regarded as independent of the existence of human or other
minds.  I believe that I have convinced most of the participants
in the debate that I am an actual adherent of "realism" in both
cases, and this took some doing.  However, I haven't addressed
the issue itself, mainly because what little I can
add to the debate seems unlikely to change many minds.  However,
GAVAN keeps emitting rhetorical questions like "What world?", so
perhaps I should say something.

	Descartes tries to begin his consideration of philosophy
with a clean slate and argues "Cogito ergo sum".  He does not even
accept the existence of other minds a priori, but considers
their existence to be a consequence of his reasoning.  In order
to get such results, he adopts methods of reasoning so strong that
he can deduce the whole of the Catholic religion - which might raise
suspicions about his "rules of inference" among non Catholics.
Positivists often also propose to start from bare sense experience
and see what can be gotten from that.

	There is, however, another principle from which one might
start, and I'd like to give it the fancy name of "Principle of
philosophical relativity".  Consider taking as a starting principle:
"There is nothing special about me".  Unless there is positive
reason to believe otherwise about some aspect of reality,
I will assume that I am not in a unique position.  If I have
experiences and thoughts of a certain kind, very likely other
people have similar thoughts and experiences.  This corresponds
to common sense prejudice, and indeed we seem to be programmed
that way.  A week old baby will open its mouth in response to its
mother's open mouth - presumably without having gone through the process of
deducing the existence of other minds and automatically making
a connection between the sight of its mother's mouth, and the
position of its own mouth, which it has never seen.  We may regard
the baby as jumping to mistaken conclusions.
If we refrain from overcoming this apparently built-in principle
of philosophical relativity, we get other minds, other physical
objects and lots more rather early in our philosophical investigation.

	Another argument that impresses me is the following: I
was taught in school about how the earth was formed from
the solar nebula, cooled off, developed life which evolved more
complicated forms, one form of which evolved intelligence, evolved
a culture, and eventually developed institutions of higher learning
in which some of us are even paid to think and argue about
philosophy.  Now I am asked to believe that all this about
life and intelligence evolving isn't to be taken seriously as
something that actually occurred but is to be taken merely as
a convenient way of organizing my experience and predicting
future experience.  I suppose I could manage this change of
viewpoint but am insufficiently motivated by any hope of benefit.

	The question of objective mathematical reality is harder (for me)
to argue about.  Would it be at all convincing to meet extra-terrestrials
and discover that while their mathematics had gone farther in some
directions than ours and less far in others, they talked
about the same basic systems of algebra, topology, analysis and
logic?  Does anyone expect something drastically different?
I'm inclined to take what apparently is a relatively extreme position
among mathematicians, although it was Godel's position, and say (for example)
that the continuum hypothesis is either true or false although it
is much less certain that humans will ever know or will ever even
have a strong opinion.

	There is also a question about what level of certainty
should be demanded before accepting the existence of other minds,
etc.  Many people profess uncertainty about whether the physical
world exists, but don't seem to give the slightest weight to the probability
that they don't in their practical actions.   This suggests that
a test be devised for the seriousness of sense data theorists.
It would involve offering a prize that could be won if there were
relations between sense data apart from those mediated by material
objects.  Someone who put effort into trying to win the prize would
be showing some seriousness about the sense data view.  Perhaps someone
can come up with a better way of formulating such a test.

	Well that's all I can come up with at the moment, though
there's lots more in the literature.

∂24-Jan-83  0731	John Batali <Batali at MIT-OZ at MIT-MC> 	Pragmatics   
Date: Monday, 24 January 1983, 10:28-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: Pragmatics
To: phil-sci at MIT-OZ at MIT-MC


It is certainly clear that we aren't going to be able to convince each
other about the existence of objective reality (we can't even agree that
2+2 really is 4, for heaven's sake!).  What we do seem to agree about
is that belief in such a reality is a pragmatically useful attitude:
  
From DAM:
  Furthermore I think
  that the correspondence theory of truth is an extremely important
  paradigm and analytic tool for epistemology. 

From GAVAN:
  (I agree that it's (the correspondence theory)
  an important paradigm)

And this is all that I have been defending.  There have been
philosophers who have argued that pragmatism is the best basis for
belief.  They would argue for the existence of objective reality on the
basis of the practicality of believing in it.  I personally tend to
sympathise with this view, but I won't proselytize.

I still claim that a robot ought to have coherent beliefs, and also
ought to understand that it must test its hypotheses against reality.
The correspondence theory of belief, I admit, doesn't include any good
account of hypothesis, but some sort of algorithm would have to be
constructed, just as GAVAN admits some sort of algorithm must be
constructed for keeping beliefs coherent.

From GAVAN:
  A robot could
  believe in the existence of reality without necessarily believing in
  the "objectivity" of its beliefs or the correspondence of its beliefs
  to that reality.

I think that the ultimate justification for the correspondence theory is
to use the real world to select among alternate coherent theories.  The
coherence theory has no way to do this.  This is the justification for
experimentation.  The ultimate justification for a conclusion based on
an experiment is: "because that's the way the world is".

But truly: the big issue, sorely underrepresented in these discussions,
and in the philosophy of science, is: how are theories generated?  How
do we get hypotheses?  Neither the coherence theory nor the
correspondence theory accounts for this.  I think that Marvin's theory of
simple theories goes a bit of the way in showing that pretty much any
approach will lead to numbers.  Marvin's "Theory of Meaning" paper also
has a bunch of ways to generate concepts.

One of the reasons I don't think that the philosophy of science has much
to say to AI is simply that most of it has not been concerned with this
sort of DOING.  I think that, for example, ETHICS might have more useful
bits for AI.

∂24-Jan-83  0859	GAVAN @ MIT-MC 	Pragmatics    
Date: Monday, 24 January 1983  11:45-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   phil-sci @ MIT-OZ
Subject: Pragmatics
In-reply-to: The message of 24 Jan 1983 10:28-EST from John Batali <Batali>

    Date: Monday, 24 January 1983, 10:28-EST
    From: John Batali <Batali>

    It is certainly clear that we aren't going to be able to convince each
    other about the existence of objective reality (we can't even agree that
    2+2 really is 4, for heaven's sake!).  What we do seem to agree about,
    is that belief in such a reality is a pragmatically useful attitude:

Belief in OBJECTIVE reality is surely not pragmatically useful
(depending upon what pragmatics means for you).  If you believe your
version of reality is objective, then be prepared to beat your head
against a wall for the rest of your life.

    From DAM:
      Furthermore I think
      that the correspondence theory of truth is an extremely important
      paradigm and analytic tool for epistemology. 

    From GAVAN:
      (I agree that it's (the correspondence theory)
      an important paradigm)

Agreeing that a paradigm is important is not equivalent to believing that
its view is correct.  Behaviorism is an important paradigm too, you know.

    And this is all that I have been defending.  There have been
    philosophers who have argued that pragmatism is the best basis for
    belief.  They would argue for the existence of objective reality on the
    basis of the practicality of believing in it.  I personally tend to
    sympathise with this view, but I won't proslytize.

It's "true" that some pragmatists consider themselves metaphysical
realists, but as an internalist who also considers himself a
pragmatist, I feel that one need not say that the reality one
perceives is necessarily the reality that's there (or should I say
"maybe there").  That is, it's not necessary to assume that that
reality is "objective."  In fact, no "subject" can say that his/her
version of reality is "objective."  Not without making me laugh, at
least.  If there is an objective reality, nobody's ever witnessed it.

Also, the version of pragmatism you seem to be espousing is the
impoverished version of James and Dewey.  The founder of pragmatism
was Peirce.  He was morally outraged at this instrumentalist
prostitution of his philosophy.  See his paper, "Issues of
Pragmaticism," in The Monist, 1905.

    I still claim that a robot ought to have coherent beliefs, and also
    ought to understand that it must test its hypotheses against reality.

It can only test its hypotheses against its VERSION of reality.

    The correspondence theory of belief, I admit, doesn't include any good
    account of hypothesis, but some sort of algorithm would have to be
    constructed, just as GAVAN admits some sort of algorithm must be
    constructed for keeping beliefs coherent.

The correspondence theorist who attempts to algorithmize
hypothesis-generation will find that the best hypothesis-generation
algorithm will systematically exclude hypotheses which, if found to be
correct, would mean that the overall structure of the belief system
would have to be revised, unless (and maybe not even then) other
hypotheses have failed to prove satisfactory.  In other words, the
correspondence theorist will have to rely upon knowledge derived from
local levels of coherence in the belief system in order to account for
hypothesis generation (there are other issues involved, of course,
like abductive inference).  Otherwise all sorts of unreasonable
hypotheses could be generated and tested and denied before the
reasonable ones reached the head of the queue.

When an earthquake erupts, why don't we hypothesize that the gods (or
God) is mad at us?  Is it because we can't verify the existence of God
(or the gods) by means of some sort of correspondence?  If so, then
why did the ancient Greeks (and ancient Israelites) often attribute
the cause of calamity to the mood of the gods (or God)?  Did THEY
experience some sort of correspondence?

I think that devising an algorithm for calculating the local coherence
of a belief would be trivial.
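
One candidate for such an algorithm - a sketch only, with the belief
graph, the "relational density" measure, and the sample beliefs all
invented for illustration - is the local clustering of a belief among
its neighbors in a graph of mutual-support links:

    from itertools import combinations

    beliefs = {                    # belief -> beliefs it coheres with
        'quakes have causes': {'plates move', 'rocks strain'},
        'plates move':        {'quakes have causes', 'rocks strain'},
        'rocks strain':       {'quakes have causes', 'plates move'},
        'gods are angry':     {'quakes have causes'},
    }

    def local_coherence(b):
        # Fraction of possible links among b's neighbors that are present.
        nbrs = beliefs[b]
        if len(nbrs) < 2:
            return 0.0
        pairs = list(combinations(nbrs, 2))
        linked = sum(1 for x, y in pairs if y in beliefs.get(x, set()))
        return linked / len(pairs)

    for b in beliefs:
        print(round(local_coherence(b), 2), b)

On this measure the mutually supporting geological beliefs score 1.0
while the isolated appeal to the gods scores 0.0, which is at least the
flavor of the "relational density" mentioned earlier.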

    From GAVAN:
      A robot could
      believe in the existence of reality without necessarily believing in
      the "objectivity" of its beliefs or the correspondence of its beliefs
      to that reality.

    I think that the ultimate justification for the correspondence theory is
    to use the real world to select among alternate coherent theories.  The
    coherence theory has no way to do this.  This is the justification for
    experimentation.  The ultimate justification for a conclusion based on
    an experiment is: "because that's the way the world is".

Someone may attempt to justify a conclusion that way, but they're
either lying or misleading themselves.  What they really mean is:
"because that's the way I THINK the world is".  Also, I don't believe
that the coherence theory of truth is concerned with the internal
coherence of one particular theory.  At least it's not so concerned in
the version I believe.  Rather, it's concerned with the overall
coherence of the structure of knowledge.  It posits that a theory or
hypothesis will not be accepted (at the individual level of analysis)
or even formulated if it cannot be made to cohere with the structure
of belief or if forcing it to cohere would mean a wholesale restructuring
of that belief system.

    But truly: the big issue, sorely underrepresented in these discussions,
    and in the philosophy of science, is: how are theories generated?  How
    do we get hypotheses?  Neither the coherence theory nor the
    correspondence theory accounts for this.  I think that Marvin's theory of
    simple theories goes a bit of the way in showing that pretty much any
    approach will lead to numbers.  Marvin's "Theory of Meaning" paper also
    has a bunch of ways to generate concepts.

These questions are not underrepresented in the philosophy of
science, but perhaps they are in these discussions.  Both the
coherence and the correspondence theories are theories of truth (as is
the consensus theory), not theories of theory-formation or
hypothesis-generation.  So they can hardly be expected to account for
either (although a coherence theorist or a correspondence theorist
should also have a theory for these that is consistent with his/her
theory of truth).

    One of the reasons I don't think that the philosophy of science has much
    to say to AI is simply that most of it has not been concerned with this
    sort of DOING.  I think that, for example, ETHICS might have more useful
    bits for AI.

Do you think that it's only coincidental that many, if not most, of
history's philosophers of science have also been the most prominent
ethical philosophers?  Don't you think that ethics and science have
some sort of intrinsic connection to each other?  Don't you think that
DOING science is also DOING?

∂24-Jan-83  0951	John Batali <Batali at MIT-OZ at MIT-MC> 	Pragmatics   
Date: Monday, 24 January 1983, 12:21-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: Pragmatics
To: GAVAN at MIT-MC, Batali at MIT-OZ at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC

    From: GAVAN @ MIT-MC

    Belief in OBJECTIVE reality is surely not pragmatically useful
    (depending upon what pragmatics means for you).  If you believe your
    version of reality is objective, then be prepared to beat your head
    against a wall for the rest of your life.

    I feel that one need not say that the reality one
    perceives is necessarily the reality that's there (or should I say
    "maybe there").  That is, it's not necessary to assume that that
    reality is "objective."  In fact, no "subject" can say that his/her
    version of reality is "objective."  Not without making me laugh, at
    least.  If there is an objective reality, nobody's ever witnessed it.

I'm not claiming that my version of reality is objective, or that anyone
should believe that.  Otherwise no one could ever consider the
possibility of being wrong.  What I am claiming is that believing that THERE
IS some objective reality is pragmatically useful.

	I still claim that a robot ought to have coherent beliefs, and also
	ought to understand that it must test its hypotheses against reality.

    It can only test its hypotheses against its VERSION of reality.

When a robot uses its coherent belief-system to generate a prediction
that such and such an experimental setup should produce a reading of 5
on a meter, and observes the reading of 6, SOMETHING is wrong.  And what
is wrong is that the world is not as the theory predicts.  If the result
is as predicted, something is right.

What about sets of equally coherent beliefs?  Which one is to be
accepted?  A robot may not be able to see into a bucket.  It is equally
coherent to say that it is filled with blocks, or empty.  How does it
decide which is true?  Answer: It looks.  What does it use to justify
the taking of the action to look?  Answer:  Because that is the way to
find out how the world is.
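
A minimal sketch of the test being described, with the tolerance, the numbers, and the list of possible culprits all assumptions added only for illustration (the last reflects the obvious caveat that an instrument or a perception, and not only the theory, may be at fault):

# Illustrative only: compare a predicted meter reading with an observation.
# Nothing here is anything proposed in this exchange.

def check_prediction(predicted, observed, tolerance=0.0):
    if abs(predicted - observed) <= tolerance:
        return "consistent: no revision is forced by this observation"
    return ("inconsistent: something is wrong somewhere in the theory, "
            "the apparatus, or the perception of the apparatus")

print(check_prediction(predicted=5, observed=6))   # -> inconsistent: ...
print(check_prediction(predicted=5, observed=5))   # -> consistent: ...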

    The correspondence theorist who attempts to algorithmize
    hypothesis-generation will find that the best hypothesis-generation
    algorithm will systematically exclude hypotheses which, if found to be
    correct, would mean that the overall structure of the belief system
    would have to be revised, unless (and maybe not even then) other
    hypotheses have failed to prove satisfactory.  In other words, the
    correspondence theorist will have to rely upon knowledge derived from
    local levels of coherence in the belief system in order to account for
    hypothesis generation (there are other issues involved, of course,
    like abductive inference).  Otherwise all sorts of unreasonable
    hypotheses could be generated and tested and denied before the
    reasonable ones reached the head of the queue.

I am not arguing against coherence.  I think that it is crucial.  So, I
think, is correspondence.

    When an earthquake erupts, why don't we hypothesize that the gods (or
    God) is mad at us?  Is it because we can't verify the existence of God
    (or the gods) by means of some sort of correspondence?  If so, then
    why did the ancient Greeks (and ancient Israelites) often attribute
    the cause of calamity to the mood of the gods (or God)?  Did THEY
    experience some sort of correspondence?

They were wrong.

    I think that devising an algorithm for calculating the local coherence
    of a belief would be trivial.

I'm not sure that Godel would agree with you.  Is Fermat's last theorem
coherent with arithmetic?

    Don't you think that
    DOING science is also DOING?

Of course it is.  So is going fishing or playing basketball.  The point
of looking at science from the point of view of AI is to see if science
AS science can tell us about how to write smart programs.  I'm not
convinced.  If the best that can be argued is that science is an
activity, and that studying activities is worthwhile, I would reply that
simpler, individual activities may be more profitable.

∂24-Jan-83  1049	GAVAN @ MIT-MC 	Pragmatics    
Date: Monday, 24 January 1983  13:41-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   phil-sci @ MIT-OZ
Subject: Pragmatics
In-reply-to: The message of 24 Jan 1983 12:21-EST from John Batali <Batali>

    From: John Batali <Batali>

        From: GAVAN @ MIT-MC

    I'm not claiming that my version of reality is objective, or that anyone
    should believe that.  Otherwise no one could ever consider the
    possibility of being wrong.  What I am claiming is that believing that THERE
    IS some objective reality is pragmatically useful.

I understand what you're claiming, and I can agree with everything
except the term "objective."  A smart robot, much like a smart human,
will realize it does not have an objective view of reality.  You can
believe there's a reality.  But how can you believe that there's an
objective reality unless you have independent access both to your mind
and to reality?  If you tell me that it's just part of your religious
faith, then I'll believe you.  But I don't see how it's a necessary
assumption.  I get along just fine without it.

    When a robot uses its coherent belief-system to generate a prediction
    that such and such an experimental setup should produce a reading of 5
    on a meter, and observes the reading of 6, SOMETHING is wrong.  And what
    is wrong is that the world is not as the theory predicts.  If the result
    is as predicted, something is right.

Couldn't it alternatively be that the meter is broken or that the robot has
misperceived the meter reading?  Couldn't there be a bug in the robot?

    What about sets of equally coherent beliefs?  Which one is to be
    accepted?  
    
Maybe both.  Is the cup half empty or half full?  If they're mutually exclusive,
then either toss a coin or devise another experiment.

    A robot may not be able to see into a bucket.  It is equally
    coherent to say that it is filled with blocks, or empty.  How does it
    decide which is true?  Answer: It looks.  What does it use to justify
    the taking of the action to look?  Answer:  Because that is the way to
    find out how the world is.

No. Because that is the way to find out how the world MIGHT BE, or
PROBABLY IS.  If you're actually going to build a robot that
perceives, it should use the perceptual short-cuts you use.  These
short-cuts cut down the amount of computrons you'll have to devote to
perceiving, making perception more powerful.  But they'll also make
perception fallible.  What if you paint blocks (and appropriate
shadows) on the bottom of the bucket?

    I am not arguing against coherence.  I think that it is crucial.  So, I
    think, is correspondence.

As an operational assumption, perhaps.  But NOT as a theory of truth!

        When an earthquake erupts, why don't we hypothesize that the gods (or
        God) is mad at us?  Is it because we can't verify the existence of God
        (or the gods) by means of some sort of correspondence?  If so, then
        why did the ancient Greeks (and ancient Israelites) often attribute
        the cause of calamity to the mood of the gods (or God)?  Did THEY
        experience some sort of correspondence?

    They were wrong.

The point is that they had no correspondence to point to, yet these
people took the moods of the gods to be truly responsible for natural
calamities. 

        I think that devising an algorithm for calculating the local coherence
        of a belief would be trivial.

    I'm not sure that Godel would agree with you.  

I'm not sure Godel matters.  The point is that you calculate it
locally when you need to, and you don't calculate it over the whole
range of beliefs.  If I understand your objection correctly I reject
it for the same reason I reject Winston's rejection of Quillian.

   Is Fermat's last theorem coherent with arithmetic?

Does Olson's logic of collective action correspond to social behavior
at the AI lab?  Does the iron law of oligarchy?  The two-step flow
theory?  Don't ask me to play your language games and I won't ask you
to play mine.

        Don't you think that
        DOING science is also DOING?

    Of course it is.  So is going fishing or playing basketball.  The point
    of looking at science from the point of view of AI is to see if science
    AS science can tell us about how to write smart programs.  I'm not
    convinced.  If the best that can be argued is that science is an
    activity, and that studying activities is worthwhile, I would reply that
    simpler, individual activities may be more profitable.

Who is trying to convince you of this?  I'm certainly not.  Hell, I
agree with you.  See my messages of about a week and a half ago.  The
discussion started NOT because the philosophy of science can tell us
how to write smart programs.  There's no reason to presume it can
(although there are some other, related aspects of philosophy that can
possibly help, as I'm sure you're aware).  Carl Hewitt brought up
scientific communities for metaphorical reasons relating to his apiary
project.  We were discussing how scientific communities come to a
consensus when JMC jumped in with the correspondence theory of truth.
None of this was intended to help you write smart programs.
Apparently, you misperceived something.

Did you wake up on the wrong side of bed this morning, or what?

∂24-Jan-83  1216	GAVAN @ MIT-MC 	subjective physical and mathematical worlds 
Date: Monday, 24 January 1983  12:48-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: subjective physical and mathematical worlds 
In-reply-to: The message of 24 Jan 83  0111 PST from John McCarthy <JMC at SU-AI>

    Date: 24 Jan 83  0111 PST
    From: John McCarthy <JMC at SU-AI>

    Subject: objective physical and mathematical worlds
    In reply to: mainly GAVAN

    . . .

    	Descartes tries to begin his consideration of philosophy
    with a clean slate and argues "Cogito ergo sum".  He does not even
    accept the existence of other minds a priori, but considers
    their existence to be a consequence of his reasoning.  In order
    to get such results, he adopts methods of reasoning so strong that
    he can deduce the whole of the Catholic religion - which might raise
    suspicions about his "rules of inference" among non Catholics.
    Positivists often also propose to start from bare sense experience
    and see what can be gotten from that.

The problem with the "clean slate theory" (and, yes, it's a problem
for the correspondence theory as well) is that some knowledge is
required in order to have knowledge.  Specifically, in order to be
able to perceive anything (and to count anything, for that matter) we
have to have the concept of space.  In order to be able to perceive
something in motion, we have to have the concept of time.  Space and
time, according to Kant (as long as we're dragging in the names of
philosophers to the consternation of Marvin), are pure and a priori
(see *The Critique of Pure Reason*).

    	There is, however, another principle from which one might
    start, and I'd like to give it the fancy name of "Principle of
    philosophical relativity".  Consider taking as a starting principle:
    "There is nothing special about me".  Unless there is positive
    reason to believe otherwise about some aspect of reality,
    I will assume that I am not in a unique position.  If I have
    experiences and thoughts of a certain kind, very likely other
    people have similar thoughts and experiences.  This corresponds
    to common sense prejudice, and indeed we seemed to be programmed
    that way.  A week old baby will open its mouth in response to its
    mother's open mouth - presumably without having gone through the process of
    deducing the existence of other minds and automatically making
    a connection between the sight of its mother's mouth, and the
    position of its own mouth, which it has never seen.  We may regard
    the baby as jumping to mistaken conclusions.
    If we refrain from overcoming this apparently built-in principle
    of philosophical relativity, we get other minds, other physical
    objects and lots more rather early in our philosophical investigation.

Yes, and Kant may even have been right about the Euclidean nature of a
priori space despite the theory of relativity.  Likewise, the
assumption of a correspondence theory of truth may be innate yet still
be incorrect.

    	Another argument that impresses me is the following: I
    was taught in school about how the earth was formed from
    the solar nebula, cooled off, developed life which evolved more
    complicated forms, one form of which evolved intelligence, evolved
    a culture, and eventually developed institutions of higher learning
    in which some of us are even paid to think and argue about
    philosophy.  Now I am asked to believe that all this about
    life and intelligence evolving isn't to be taken seriously as
    something that actually occurred but is to be taken merely as
    a convenient way of organizing my experience and predicting
    future experience.  I suppose I could manage this change of
    viewpoint but am insufficiently motivated by any hope of benefit.

    . . .

Well, you can believe that these things actually happened if you want,
but some amount of scepticism is healthy I think.  The benefit of
chucking the correspondence theory (the "clean slate theory") is that
it will help you to chuck illusions and delusions.  Suppose that you
were taught in school everything you were taught, with one minor
exception -- that the phenomenon of gravity is due to the sucking
action of the earth and that this also explains why outer space is a
vacuum.  Would you believe it?  Why or why not?

What are optical illusions?

Please remember that I am not denying the existence of reality, only
the objectivity of anyone's experience of reality (and also the idea
of a correspondence).  Sceptics have always been able to demonstrate
that the existence of reality is unprovable but they've never been
able to disprove its existence either.  Even the sceptics engaged in
practice, so they did really assume that the world exists, as I do.
Their point and my point is not that the world doesn't exist.  The
point is that there's no necessary correspondence between what's in
the world and what's in your mind (or what's in your sentences).


∂24-Jan-83  1400	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Monday, 24 January 1983  16:38-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   ISAACSON @ USC-ISI
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


ISAACSON:  For example, by saying that "ALL
	birds fly" do you also postulate that in your world birds
	can't be sick, crippled, newly-hatched, and what not?  What if
	Fred HAS a permanently broken wing?

	This is a clear example of the difficulty of forgetting about
the real world in discussions of mathematics.

	David Mc

∂24-Jan-83  1406	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Date: Monday, 24 January 1983  16:58-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 24 Jan 1983  16:12-EST from DAM

    Date: Monday, 24 January 1983  16:12-EST
    From: DAM

    . . .

    	At any rate the failure of the above cognitive theory in no
    way suggests that there are not sophisticated innate mechanisms.

It certainly seems plausible that some sort of logical mechanism might
be innate and used by all (even some "lower" species) at a
sub-conscious or pre-conscious level, or even at the level of the
interaction of brain cells.  I agree with DAM.  Evidence that shows
that some people do not use or reflect upon logical formalisms
consciously or even subconsciously in their daily routines does not
necessarily imply that these mechanisms are not there.

    There is very good (I think) independent evidence for such mechanisms
    (the universality of sentences) and further I think there is good
    independent evidence for the innateness of mathematics.

I'd like to know what the evidence is.

∂24-Jan-83  1415	DAM @ MIT-MC 	objective physical and mathematical worlds    
Date: Monday, 24 January 1983  17:05-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JCM @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: objective physical and mathematical worlds


	Date: 24 Jan 83  0111 PST
	From: John McCarthy <JMC at SU-AI>

	The question of objective mathematical reality is harder (for
	me) to argue about.  Would it be at all convincing to meet
	extra-terrestrials and discover that while their mathematics had gone
	farther in some directions than ours and less far in others, they
	talked about the same basic systems of algebra, topology, analysis and
	logic?  Does anyone expect something drastically different?

	I certainly believe that human mathematics is a-priori in the
sense that it is the same for all humans but I would not be surprised
if martians had a different system.  There are lots of different
formal systems one can define and there is probably a computational
sense in which all these languages are expressively equivalent.
Thus even though the innate structures of martians might be different
there would undoubtedly exist a translation procedure from their
system to ours.  At the very worst we could translate a martian
sentence Phi to the human sentence "Phi is deducible by martians".
	I do think that our innate mathematical language is best
understood semantically, i.e. by defining a correspondence between
sentences and truths of a mathematical universe.  However I think
there are several different universes over which our deductions are
sound.

	David Mc

∂24-Jan-83  1426	DAM @ MIT-MC 	correction 
Date: Monday, 24 January 1983  17:16-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: correction


	I do NOT think that modern set theory is a good model
of human mathematics.



∂24-Jan-83  1431	John McCarthy <JMC@SU-AI> 	"your version of reality"   
Date: 24 Jan 83  1415 PST
From: John McCarthy <JMC@SU-AI>
Subject: "your version of reality"   
To:   gavan@MIT-OZ, phil-sci@MIT-OZ   

Subject: "your version of reality"
In reply to: GAVAN
    "Belief in OBJECTIVE reality is surely not pragmatically useful
    (depending upon what pragmatics means for you).  If you believe your
    version of reality is objective, then be prepared to beat your head
    against a wall for the rest of your life."

	The phrase "your version of reality" leads to two kinds
of confusion:

	1. Belief in the existence of objective reality, i.e. that
there are facts independent of human experience in general and one's
own in particular, does not require belief in a particular "version
of reality".  Thus I am prepared to learn that there is a wall
where I previously thought there was an opening.  Moreover, this
experience reinforces the doctrine that my beliefs are true only
if they correspond to reality.

	2. A person's beliefs cannot be summarized as a "version
of reality" for two reasons.  First a version of reality would involve
more detail than a human holds - the names of all the people in
the world to begin with.  Our opinions cover only a tiny part of
reality.  Second, even when an AI program's reality is restricted to a
tiny part of the world, e.g. a collection of blocks on a table, its
view cannot in general be regarded as a version of reality.  It may
not have an opinion about the location of some block or it may have
a disjunctive opinion: e.g. it may believe that a certain box
contains a red block or a green block.  This requires distinguishing
states of belief from belief in states of the world - or even in
partial states of the world.  Bob Moore in his M.I.T. master's
thesis emphasized how AI programs whose belief structures were
whole worlds or partial worlds are limited in their capabilities.
The first approximation to a state of belief is a THEORY in the
sense of mathematical logic.  A possible state of reality corresponding
to the state of belief would be a MODEL of the THEORY.  I'm
adopting a convention of capitalizing technical terms.  Unfortunately,
it may be that more sophisticated notions are required.
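
As a minimal sketch of the THEORY/MODEL distinction in the blocks example, with the propositions and the brute-force enumeration invented only for illustration: the disjunctive belief is a single theory, and each truth-assignment satisfying it is one possible state of the tiny world.

# Illustrative sketch: a state of belief as a (propositional) THEORY and
# the possible states of the world as its MODELS.  The propositions and
# the theory are assumptions made up for this example.
from itertools import product

propositions = ["red_block_in_box", "green_block_in_box"]

def theory(model):
    """The belief: the box contains a red block or a green block."""
    return model["red_block_in_box"] or model["green_block_in_box"]

models = [dict(zip(propositions, values))
          for values in product([False, True], repeat=len(propositions))
          if theory(dict(zip(propositions, values)))]

for m in models:
    print(m)
# Three assignments satisfy the theory, so this one state of belief is
# compatible with three distinct possible states of the tiny world.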

∂24-Jan-83  1517	John McCarthy <JMC@SU-AI> 	correspondence theory  
Date: 24 Jan 83  1508 PST
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory  
To:   gavan@MIT-OZ, phil-sci@MIT-OZ   

Subject: correspondence theory
In reply to: GAVAN

GAVAN: "Please remember that I am not denying the existence of
       reality, only the objectivity of anyone's experience of reality
       (and also the idea of a correspondence).  Sceptics have always been
       able to demonstrate that the existence of reality is unprovable but
       they've never been able to disprove its existence either.  Even the
       sceptics engaged in practice, so they did really assume that the
       world exists, as I do.  Their point and my point is not that the
       world doesn't exist.  The point is that there's no necessary
       correspondence between what's in the world and what's in your mind
       (or what's in your sentences)."

     Can it be that most of our arguments have been based on mere
misunderstanding?  The correspondence theory does not require the
correctness of anyone's opinion of reality.  Correspondence is instead the
criterion for the truth of a belief.  In this interpretation I claim to
also speak for the authors referred to in the Encyclopedia article on the
correspondence theory.

     There used to be a further issue about the "objectivity of
observation", i.e. whether trees (directly observed) are as real as
elementary particles, but I think arguments on this subject have died down
- both are real.


∂24-Jan-83  1550	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
Date: Monday, 24 January 1983  18:36-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 24 Jan 1983  16:58-EST from GAVAN



From DAM: There is very good (I think) independent evidence for such
      mechanisms (the universality of sentences) and further I think
      there is good independent evidence for the innateness of
      mathematics.

The evidence for universality of "sentences" is pretty poor.  What there
is evidence of is that children can learn pretty complicated speech
patterns.  What seems innate, if anything, is "words" - compact units.
The sentence thing only occurs in cultures, and only a portion of
normal speech uses sentences.

∂24-Jan-83  1554	DAM @ MIT-MC 	Objectivity of Mathematics
Date: Monday, 24 January 1983  16:17-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Objectivity of Mathematics


	Now I would like to turn my attention to the philosophical
and mathematical arguments FOR the innateness of mathematics.

	Date: Sunday, 23 January 1983  23:45-EST	
	From: MINSKY

	It occurs to me that we are talking past one another, and using "a
	priori" in different ways.  You seem to be saying that "a priori"
	means that "mathematics is true", while I always though that "a
	priori" meant things like "people just know, without learning or being
	told, that (say) mathematics is true".  If that's what we're
	discussing, I have no quarrel; I believe that the mathematics I
	understand now is OK.  (Unfortunately, my experience has shown that
	some of it, at least, will turn out bad from time to time, and I'll
	have to change some of the details of those beliefs.)

	You do not seem to appreciate the nature of mathematical
truth.  It does not make sense to even talk about a part of
mathematics as being "true" or "false" the way that real world
statements are true or false.  Mathematics addresses the properties of
definitions; all mathematical "truths" are statements which are "true
by definition" and are completely independent of empirical truth.
When you say that "you believe the mathematics you understand now is
ok" how can you conceive of it not being ok?  Consider the simple
syllogism: if ALL foo's are gretchy and Fred is a foo then Fred is
gretchy.  As I have said before ALL mathematical statements are of
precisely this nature.  Are you saying that it is possible that at
some future time you might not believe this?  Note that this
statement has nothing to do with the real world.  There is no
experiment you can perform to test it, no observation you could make
that would have any bearing on its truth.  Nor is there any
observation you could have made in the past which could lead you to
believe this implication.  The truth of the implication follows from
the DEFINITIONS of the concepts involved.
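
One way to exhibit the "true by definition" point in executable form, with the three-element domain below an arbitrary assumption made only for this illustration: however "foo" and "gretchy" are interpreted as subsets of the domain, and whatever "Fred" denotes, there is no interpretation in which the premises hold and the conclusion fails.

# Illustrative brute-force check over a tiny, assumed domain: the syllogism
#   (all foos are gretchy) and (Fred is a foo)  =>  (Fred is gretchy)
# has no counterexample under any interpretation of its non-logical words.
from itertools import chain, combinations

domain = {"a", "b", "c"}          # arbitrary 3-element domain, assumed here

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

counterexamples = 0
for foos in subsets(domain):
    for gretchy in subsets(domain):
        for fred in domain:
            premises = set(foos) <= set(gretchy) and fred in set(foos)
            conclusion = fred in set(gretchy)
            if premises and not conclusion:
                counterexamples += 1

print(counterexamples)   # 0: the implication holds in every interpretation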

	I claim that the nature of mathematical truth is such that the
ONLY way mathematical statements can be said to be true at all is if
they are innate.  MATHEMATICAL STATEMENTS HAVE NOTHING TO DO WITH
EXPERIENCE.

	David Mc

∂24-Jan-83  1554	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Monday, 24 January 1983  16:34-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	Date: Sunday, 23 January 1983  23:58-EST
	From: MINSKY

	Even if we ignore all the problems of how our syllogistic templates
	apply to the real world, how does DAM's apriorism survive what
	happened to our beliefs in naive set theory when we faced the barber?

	As I have said before while I do think that mathematics is
innate I do not think that we have any precise theory of what
mathematics is.  That inconsistent set theories can be defined
is not surprising.  I do think that even modern set theory is a
good model of what human mathematics is.  Set theory is defined in
an INFORMAL (english) metatheory and no one (I think) has a good precise
model for what that informal system is.

	Consider a barber whose age in years is both even and odd.
There is no such barber.  So what?

	David Mc

∂24-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Monday, 24 January 1983  16:12-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	It seems that we approach this issue from two different
paradigms.  I am entrenched in the paradigm of mathematics and you
seems to be more concerned with experimental cognitive psychology.  I
think that it is important that we both be able to see the world
through the other's spectacles.  In this message I will try to
express and respond to the ideas presented by you and KDF concerning
psychological experiments.  In a later message I will present further
philosophical arguments for the a-priori nature of mathematics.
	I must admit that I am not very familiar with the experiments
in question.  However I have no doubt that these experiments firmly
refute a certain theory of human cognition.  I would first like to
state the theory which I think is refuted by these experiments.
	Under this theory a subject in an experiment translates an
experimental situation into a set of formulas in some formal language
(say first order predicate calculus).  Furthermore this translation is
sufficiently straightforward that the experimenter knows roughly what
formulas the subject has constructed.  After constructing this formal
representation of the situation the subject then proceeds to perform
deduction and answers questions on the basis of those deductions.
Since the experimenter knows the formulas the subject is using and
since he knows the valid laws of inference for the formal system he
can predict certain responses.
	I have no trouble believing that any such theory conclusively
fails experimental tests.  The experimental situation is much more
complex than this simple scenario can allow for.  One reason for this
is that any experimental situation takes place in the real
world and any NAIVE subject will use real world heuristic knowledge.
Another complexity comes from the common sense meaning of words.
Consider the word "implies".  The common sense meaning of this word is
quite different (I think) from the precise mathematical meaning.  It
is ridiculous to think that a child or naive adult translates an
english sentence using "implies" in the standard precise mathematical
way.  It takes a lot of studying before one understands the precise
mathematical meaning of this word.  It also takes a lot of work before
one can divorce oneself from real world knowledge in mathematical
contexts.
	I think that there are lots of innate cognitive mechanisms and
that tautological deduction is only a minor component of most common
sense cognition.  I don't think that any experimenter can claim enough
of an understanding of the cognition of a child to be able to isolate
one innate component among many.  Adults must be trained and must
work hard before they can achieve this isolation.
	At any rate the failure of the above cognitive theory in no
way suggests that there are not sophisticated innate mechanisms.
There is very good (I think) independent evidence for such mechanisms
(the universality of sentences) and further I think there is good
independent evidence for the innateness of mathematics.

	David Mc

∂24-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Monday, 24 January 1983  16:14-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	Date: Sunday, 23 January 1983  23:54-EST
	From: MINSKY

	By the way, I still consider it remotely possible that an intelligent
	thinking machine can some day be built using tidy, orderly, single
	definitions - along the lines proposed long ago by McCarthy.  But
	I see that as very far away at present.

	The idea that real artificial intelligence is close at hand
seems to me the delusion of AI researchers who live in fantasy worlds
of hypothetical engineering.

∂24-Jan-83  1631	KDF @ MIT-MC 	The Objectivity of Mathematics 
Date: Monday, 24 January 1983  19:11-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 24 Jan 1983  16:34-EST from DAM

	Here is the example from Johnson-Laird's paper:

	"All of the singers are professors"
	"All of the poets are professors"

There are several buggy conclusions people draw, such as:

	"All of the poets are singers"
	"Some of the singers are poets"

These are just class inclusion and do not rely on the formal reading
of the word "implies".  (note: while the data are interesting, the
rest of the paper is pretty muddled.)
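
For what it's worth, the invalidity is easy to exhibit concretely; the little roster below is invented for this sketch, not taken from the paper, and in it both premises hold while neither buggy conclusion does:

# Illustrative counterexample: premises true, "buggy" conclusions false.
# The roster is an assumption made up for this sketch.
singers    = {"alice"}
poets      = {"bob"}
professors = {"alice", "bob"}

premise_1 = singers <= professors          # all of the singers are professors
premise_2 = poets <= professors            # all of the poets are professors
buggy_a   = poets <= singers               # all of the poets are singers
buggy_b   = bool(singers & poets)          # some of the singers are poets

print(premise_1, premise_2, buggy_a, buggy_b)   # True True False False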

	There is an interesting point hidden in DAM's message - that
people ISOLATE some innate mechanism, harnessing it for conscious use,
rather than INVENTING reliable techniques.  Note that this is a very
different sense from Marvin's "discovery" of arithmetic because of the
sparseness of the space of theories it lies in.
	I personally wouldn't be too surprised if we looked at some level
of description for a mind and found something that looked more or less
like M.P. (modus ponens).  Boole may have been right!  However, that does not mean that
a theory organized around logic will contain everything needed for a theory
of mind.  In particular, "administrative" issues (the more fashionable way
of saying "control") arise, as painful experience has shown.  Although
the data is controversial, there are also strong indications that we do
not use a single, uniform way of drawing conclusions.  Our perceptual system
seems to get into the act with diagrams, for instance.  The idealization of
mind to a statement manipulator may throw away too many of the interesting
phenomena.

∂24-Jan-83  1658	ISAACSON at USC-ISI 	Re:  The objectivity of mathematics    
Date: 24 Jan 1983 1620-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  The objectivity of mathematics
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]24-Jan-83 16:20:14.ISAACSON>

In-Reply-To: Your message of Monday, 24 Jan 1983, 16:14-EST


DAM: "Consider the simple syllogism: if ALL foo's are gretchy and
Fred is a foo then Fred is gretchy.  As I have said before ALL
mathematical statements are of precisely this nature.  Are you
saying that it is possible that at some future time you might not
believe this."


Well, I think that perfectly good mathematicians don't entirely
believe that right now!  The "simple" (simplistic ?) syllogism
you posit as the cornerstone of ALL mathematical statements [a
position open to severe criticisms, way beyond what is intended
here] presupposes the validity of the law of excluded middle
[tertium non datur].  As you may know, the Dutch school of
Intuitionism refuses (with considerable consistency and success)
to swallow that principle and has developed a certain logic that,
to put it mildly, circumvents naive Aristotelian syllogisms such
as you promote so vigorously.

I respectfully submit that your concept of mathematics is due its
periodic reformulation.

-- Joel



∂24-Jan-83  1700	ISAACSON at USC-ISI 	Re:  The objectivity of discussing mathematics   
Date: 24 Jan 1983 1631-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  The objectivity of discussing mathematics
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]24-Jan-83 16:31:12.ISAACSON>


In-Reply-To: Your message of Monday, 24 Jan 1983, 16:38-EST


DAM: "This is a clear example of the difficulty of forgetting
about the real world in discussions of mathematics."


Hey, wait a minute.  I anticipated a "clever" response such as
that and effectively pre-empted it in the subsequent paragraph,
which you don't care to quote.  Please respond to the substance
of my arguments.


∂25-Jan-83  1103	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Tuesday, 25 January 1983  12:03-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	Date: Monday, 24 January 1983  18:36-EST
	From: MINSKY

	The evidence for universality of "sentences" is pretty poor.  What
	there is evidence of is that children can learn pretty complicated
	speech patterns.  What seems innate, if anything, is "words" - compact
	units.  The sentence thing only occurs in cultures, and only a portion
	of normal speech uses sentences.

	I suspect that any linguist would consider the above statement
ridiculous.  However the whole field of modern linguistics is predicated
on the notion of sentence; a language is DEFINED as the set of SENTENCES
judged grammatical.
	Linguistics aside I am a little confused by your statement.
Are you claiming that there are human cultures which do not use
sentences as a (the) primary form of communication?  I would think (and
have been told) that not only is the notion of sentence a human
cultural universal but that all human sentences are composed of a noun
and a verb phrase.
	Perhaps you are claiming that sentences are only cultural in
the sense that a human raised without other human contact would not
develop sentences.  Well, a child raised in the dark has an atrophied
visual system, but this does not imply that there is no innate visual
system.

	David Mc

∂25-Jan-83  1104	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Tuesday, 25 January 1983  12:13-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   KDF @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	Date: Monday, 24 January 1983  19:11-EST
	From: KDF

	Here is the example from Johnson-Laird's paper:

	"All of the singers are professors"
	"All of the poets are professors"

	There are several buggy conclusions people draw, such as:

	"All of the poets are singers"
	"Some of the singers are poets"

	These are just class inclusion and do not rely on the formal reading
	of the word "implies".  (note: while the data are interesting, the
	rest of the paper is pretty muddled.)

	Well "implies" is not an issue here but there is still the problem
of real world knowledge.  If I was told the above things about an actual room
full of people, and I did not understand that it was a logic experiment,
I would consider these conclusions pretty likely (it could be a convention
of academic music theory types; why else would you tell me about poets
and singers and professors?).

	Of course I agree with the bulk of your last message.

	David Mc

∂25-Jan-83  1104	DAM @ MIT-MC 	Objectivity of Mathematics
Date: Tuesday, 25 January 1983  12:35-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   ISAACSON @ USC-ISI
cc:   phil-sci @ MIT-OZ
Subject: Objectivity of Mathematics


	Date: Monday, 24 January 1983  19:20-EST
	From: ISAACSON at USC-ISI

	The "simple" (simplistic ?) syllogism you posit as the cornerstone of
	ALL mathematical statements [a position open to severe criticisms, way
	beyond what is intended here] presupposes the validity of the law of
	excluded middle [tertium non datur].

	I did not mean to imply that I consider this simple syllogism to
be the cornerstone of mathematics, only that I consider all mathematical
truth to be just as intuitively undeniable once understood.

	As you may know, the Dutch school of Intuitionism refuses (with
	considerable consistency and success) to swallow that principle and
	has developed a certain logic that, to put it mildly, circumvents
	naive Aristotelian syllogisms such as you promote so vigorously.

	I addressed this issue in an earlier message to Hewitt and will
summarize my response here.  The basic problem (I think) is the translation
of a word like "implies" into the innate language of thought (assuming
there is one).  There are several possible translations (I think) and
these lead to different mathematics.  The interesting thing is that any
mathematician can understand any other mathematician's translation.  I can
understand (and agree with) intuitionism, I simply interpret implies
differently.  An inutitionist can understand a more classical mathematician.
The truths of each branch of mathematics are objective given the definitions
of that branch of mathematics.  It is important to note that mathematics
is really always done in an informal (natural language) system and no
one (I think) has a precise (formal) understanding of that system.
	For those who have heard this before sorry about the redundancy
but sometimes redundancy is a good thing.

	David Mc.

P.S.  Sorry about the abuse of your earlier message, I couldn't resist the
temptation to use it to support my points about real world knowledge.

∂25-Jan-83  1114	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
Date: Tuesday, 25 January 1983  13:58-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 25 Jan 1983  12:03-EST from DAM


DAM:	I suspect that any linguist would consider the above statement
    ridiculous.  However the whole field of modern linguistics is predicated
    on the notion of sentence; a language is DEFINED as the set of SENTENCES
    judged grammatical.

Really, this is too low a level for discussion.  The Chomskian linguists
might take that attitude.  But we should discuss philosophy, not religion.

∂25-Jan-83  1353	GAVAN @ MIT-MC 	"your version of reality"    
Date: Tuesday, 25 January 1983  16:46-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: "your version of reality"   
In-reply-to: The message of 24 Jan 83  1415 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

        In reply to: GAVAN

        "Belief in OBJECTIVE reality is surely not pragmatically useful
        (depending upon what pragmatics means for you).  If you believe your
        version of reality is objective, then be prepared to beat your head
        against a wall for the rest of your life."

    	The phrase "your version of reality" leads to two kinds
    of confusion:

    	1. Belief in the existence of objective reality, i.e. that
    there are facts independent of human experience in general and one's
    own in particular, does not require belief in a particular "version
    of reality".  

Anyone's belief in the existence of an "objective" reality can be
nothing more than an article of faith.  However you answer the
question "if a tree falls in the forest and nobody hears it, did it
make a sound?" you cannot prove your answer to be correct unless you
have some independent access to reality outside your image of it (the
"God's-eye view").  As soon as you tell me that you believe "there are
facts independent of human experience" I will ask "How do you know?"
and "facts about what?"  You will not be able to answer the first
question unless you're God or unless you tell me it's an article of
faith on your part.  You'll probably answer the second by saying
something like "facts about reality."  I'll then ask you "what
reality?"  You'll respond "the reality I experience."  Then I'll say,
"Then you're not talking about facts independent of human experience!"

    Thus I am prepared to learn that there is a wall
    where I previously thought there was an opening.  Moreover, this
    experience reinforces the doctrine that my beliefs are true only
    if they correspond to reality.

Well, if you want to continue to maintain the correspondence theory,
you could say that your beliefs are true only if they correspond to
your image or "version" of reality (and this of course is nothing more
than an ensemble of other beliefs).  I would prefer to say that my
beliefs about things in my image of reality are true only if they
cohere with my other beliefs about things in my image of reality.  I
am prepared to recognize the wall where I thought there was an opening
because I am not presumptuous enough to assume that my image of
reality truly corresponds to the reality my faith tells me is there.

    	2. A person's beliefs cannot be summarized as a "version
    of reality" for two reasons.  First a version of reality would involve
    more detail than a human holds - the names of all the people in
    the world to begin with.  Our opinions cover only a tiny part of
    reality.  

By a "version of reality" I certainly don't mean an exhaustive
version.  Of course that's impossible.  If it's true that our opinions
cover only a tiny part of reality (and I think that it is true), then
how can we say that we could have a belief that corresponds to
something in that reality?  Even if our perceptual images are not
distorted (and they probably are to some degree) they only summarize
what's in the "real" perceptual field.  Haven't you ever "overlooked"
anything?

    Second, even when an AI program's reality is restricted to a
    tiny part of the world, e.g. a collection of blocks on a table, its
    view cannot in general be regarded as a version of reality.  It may
    not have an opinion about the location of some block or it may have
    a disjunctive opinion: e.g. it may believe that a certain box
    contains a red block or a green block.  This requires distinguishing
    states of belief from belief in states of the world - or even in
    partial states of the world.  

I take this to be a good reason to reject the correspondence theory of
truth.

    Bob Moore in his M.I.T. master's
    thesis emphasized how AI programs whose belief structures were
    whole worlds or partial worlds are limited in their capabilities.
    The first approximation to a state of belief is a THEORY in the
    sense of mathematical logic.  A possible state of reality corresponding
    to the state of belief would be a MODEL of the THEORY.  I'm
    adopting a convention of capitalizing technical terms.  Unfortunately,
    it may be that more sophisticated notions are required.

OK, so what?  It seems to me that any theory or any model is a
reduction.  Even the best such reduction cannot possibly correspond to
what has been reduced, since it is a reduction.  Even if a model or
theory is not mis-specified, it's always under-specified.  Otherwise
it's not a model, but the thing itself.  So where's the
correspondence?

∂25-Jan-83  1357	GAVAN @ MIT-MC 	correspondence theory   
Date: Tuesday, 25 January 1983  16:50-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 24 Jan 83  1508 PST from John McCarthy <JMC at SU-AI>

    Date: 24 Jan 83  1508 PST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan, phil-sci at MIT-OZ
    Re:   correspondence theory  

    Subject: correspondence theory
    In reply to: GAVAN

    GAVAN: "Please remember that I am not denying the existence of
           reality, only the objectivity of anyone's experience of reality
           (and also the idea of a correspondence).  Sceptics have always been
           able to demonstrate that the existence of reality is unprovable but
           they've never been able to disprove its existence either.  Even the
           sceptics engaged in practice, so they did really assume that the
           world exists, as I do.  Their point and my point is not that the
           world doesn't exist.  The point is that there's no necessary
           correspondence between what's in the world and what's in your mind
           (or what's in your sentences)."

         Can it be that most of our arguments have been based on mere
    misunderstanding?  The correspondence theory does not require the
    correctness of anyone's opinion of reality.  Correspondence is instead the
    criterion for the truth of a belief.  In this interpretation I claim to
    also speak for the authors referred to in the Encyclopedia article on the
    correspondence theory.

As I understand the correspondence theory, it posits that there is a
one-to-one correspondence between things in the world and things in
the mind  --  that we "copy" the world, starting with a "clean slate."

         There used to be a further issue about the "objectivity of
    observation", i.e. whether trees (directly observed) are as real as
    elementary particles, but I think arguments on this subject have died down
    - both are real.

How do you know?

∂25-Jan-83  1404	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Date: Tuesday, 25 January 1983  16:54-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 24 Jan 1983  18:36-EST from MINSKY

    Date: Monday, 24 January 1983  18:36-EST
    From: MINSKY
    Sender: MINSKY
    To:   GAVAN
    cc:   DAM, phil-sci
    Re:   The Objectivity of Mathematics

    From DAM: There is very good (I think) independent evidence for such
          mechanisms (the universality of sentences) and further I think
          there is good independent evidence for the innateness of
          mathematics.

    The evidence for universality of "sentences" is pretty poor.  What there
    is evidence of is that children can learn pretty complicated speech
    patterns.  What seems innate, if anything, is "words" - compact units.
    The sentence thing only occurs in cultures, and only a portion of
    normal speech uses sentences.

I agree.  But this doesn't mean that some sort of logical mechanism is
not innate.  Its "sentences" may not be anything like the sentences of
natural language or even of logic, but might instead be configurations
of brain cells.  Is there not a logic to a DNA molecule?

∂25-Jan-83  1512	BATALI @ MIT-MC 	correspondence theory  
Date: Tuesday, 25 January 1983  17:22-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 25 Jan 1983  16:50-EST from GAVAN


    From: GAVAN

    As I understand the correspondence theory, it posits that there is a
    one-to-one correspondence between things in the world and things in
    the mind  --  that we "copy" the world, starting with a "clean slate."

No.  From Bertrand Russell:  "truth consists in some form of
correspondence between belief and fact."  The point is that there are
objective facts, and the truth of sentences depends on those facts.

Nowhere is any one-to-one correspondence assumed.  In fact, the
correspondence theory isn't about minds at all -- it is about
sentences or propositions or whatever can be said to be true or
false. 

Nowhere is any claim made about "copying" the world.

And the correspondence theory says absolutely nothing about what is or
is not innate.

It seems that we might really be arguing about different things.

∂25-Jan-83  1553	DAM @ MIT-MC 	The Objectivity of Mathematics 
Date: Tuesday, 25 January 1983  18:25-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics


	Date: Tuesday, 25 January 1983  13:58-EST
	From: MINSKY

	Really, this is too low a level for discussion.  The Chomskian
	linguists might take that attitude.  But we should discuss philosophy,
	not religion.

	I thought the question of the human universality of sentences
was to be taken as an empirical issue.  Even if we ignore Chomskian
linguistics the human universality of sentences seems empirically
undeniable.

	David Mc

∂25-Jan-83  1612	GAVAN @ MIT-MC 	correspondence theory   
Date: Tuesday, 25 January 1983  18:54-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 25 Jan 1983  17:22-EST from BATALI

    From: BATALI

        From: GAVAN

        As I understand the correspondence theory, it posits that there is a
        one-to-one correspondence between things in the world and things in
        the mind  --  that we "copy" the world, starting with a "clean slate."

    No.  From Bertrand Russell:  "truth consists in some form of
    correspondence between belief and fact."  The point is that there are
    objective facts, and the truth of sentences depends on those facts.

There aren't any objective facts.  That's my argument by assertion.  My proof
is this:  No fact is objective because only subjects can have the concept of
a "fact."  There are only subjective facts.  Now, try to prove that there are
objective facts.

    Nowhere is any one-to-one correspondence assumed.  In fact, the
    correspondence theory isn't about minds at all -- it is about
    sentences or propositions or whatever can be said to be true or
    false. 

What, other than a mind, emits sentences and propositions?  If the
correspondence referred to is something like Tarski's "It's true that
foo is bar iff foo is bar," then the correspondence theory is not only
wrong, it's meaningless.

    Nowhere is any claim made about "copying" the world.

Oh, I think Locke said something about this, as I recall.  Maybe Hume as well.
I'll have to look this one up.  Putnam seems to equate the two in *Reason, 
Truth, and History*.  Where have you looked?

    And the correspondence theory says absolutely nothing about what is or
    is not innate.

Well, it certainly IMPLIES something about it.

    It seems that we might really be arguing about different things.

Maybe.  Maybe not.  I still object to any talk about "objective" reality.

∂25-Jan-83  1622	BATALI @ MIT-MC 	Practical Necessity    
Date: Tuesday, 25 January 1983  19:07-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: Practical Necessity
In-reply-to: The message of 25 Jan 1983  16:46-EST from GAVAN

    From: GAVAN

    I would prefer to say that my
    beliefs about things in my image of reality are true only if they
    cohere with my other beliefs about things in my image of reality.  I
    am prepared to recognize the wall where I thought there was an opening
    because I am not presumptuous enough to assume that my image of
    reality truly corresponds to the reality my faith tells me is there.

This seems like a rather convoluted way of just acting in accord with
objective reality.  Why affix the "image of" to all mentions of
reality?  Certainly reality may be such that we can only get a
subjective picture of it, but it is still reasonable to act as if it
is there.

Suppose that something believed as an article of faith is such that it
is virtually impossible not to hold it and deal effectively with one's
goals.  It seems that such a belief is held not as a matter of faith
but of (practical) necessity.  This is what I understand Kant as
saying (to drop a relatively heavy name).  I certainly must understand
the limitations of the faith -- but nevertheless I must hold it.
Except, perhaps, for mathematical truth, I don't think that there are
any more strong states of belief than those in such practical
necessities.  To the degree that if belief in X is justified then X is
true, "objective reality" is thus "true."

I suppose the truth of practical necessities is just as much a matter
of faith as anything else, and is probably not necessary itself, so
perhaps this argument won't convince anyone.  On the other hand, it is
the only argument I can think of in support of the belief: "beliefs
must be coherent."  

∂25-Jan-83  1633	John McCarthy <JMC@SU-AI> 	correspondence theory, misunderstanding thereof 
Date: 25 Jan 83  1509 PST
From: John McCarthy <JMC@SU-AI>
Subject: correspondence theory, misunderstanding thereof 
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

Subject: correspondence theory, misunderstanding thereof
In reply to: GAVAN
GAVAN:	`As I understand the correspondence theory, it posits that there is
	a one-to-one correspondence between things in the world and things
	in the mind -- that we "copy" the world, starting with a "clean
	slate."'

I think you misunderstand the correspondence theory.  I don't believe that
any of its adherents would claim that the correspondence between the
things in the world and things in the mind is one-to-one.  I refer again
to the Encyclopedia article for others' views.  For myself, the correspondence
involved in the truth conditions for a sentence can be quite complex.
The important point is the existence of reality independent of human
experience.  You can call it an article of faith if you like; I'd
call it a bet.


∂25-Jan-83  1642	GAVAN @ MIT-MC 	correspondence theory, misunderstanding thereof  
Date: Tuesday, 25 January 1983  19:23-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: correspondence theory, misunderstanding thereof 
In-reply-to: The message of 25 Jan 83  1509 PST from John McCarthy <JMC at SU-AI>

    Date: 25 Jan 83  1509 PST
    From: John McCarthy <JMC at SU-AI>
    To:   gavan
    cc:   phil-sci at MIT-OZ
    Re:   correspondence theory, misunderstanding thereof 

    Subject: correspondence theory, misunderstanding thereof
    In reply to: GAVAN
    GAVAN:	`As I understand the correspondence theory, it posits that there is
    	a one-to-one correspondence between things in the world and things
    	in the mind -- that we "copy" the world, starting with a "clean
    	slate."'

    I think you misunderstand the correspondence theory.  I don't believe that
    any of its adherents would claim that the correspondence between the
    things in the world and things in the mind is one-to-one.  

I was hoping for this response.  That's why I put the one-to-one expression
in my message.  If the correspondence is not one-to-one, what is it?  If the
supposed correspondence is one-to-many, many-to-one, or many-to-many, then
what sense does it make to talk about a correspondence at all?

    I refer again to the Encyclopedia article for others' views.  

Of course, there's also the primary source material . . .

    For myself, the correspondence involved in the truth conditions for a
    sentence can be quite complex.  The important point is the existence
    of reality independent of human experience.  You can call it an
    article of faith if you like; I'd call it a bet.

OK.  Call it a bet.  You'll never collect unless you can find someone with
a God's-eye view.  

My point is that you're not positing a correspondence between a
sentence or a set of sentences and something in a "reality independent
of human experience".  You have no access to any such "reality
independent of human experience" unless you're God, and you're not.
You only have access to the reality you believe is there -- to the
reality you have an image of.  So if you believe the sentence you
utter and you compare it to something in the "real world", you're
actually comparing the belief you uttered and your beliefs about the
world.  The belief you utter (the sentence) is true (for you) iff it
coheres with your image of the world.  This is the COHERENCE theory of
truth.   
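
A toy rendering may make the contrast vivid (a minimal sketch; the function
names and the reduction of coherence to bare non-contradiction are
illustrative assumptions, not anything stated above): a candidate belief is
admitted by checking it against the believer's other beliefs, and the world
never enters into it.

    # Minimal sketch: coherence reduced to bare non-contradiction.
    # A belief is a (claim, truth-value) pair; a real coherence theory
    # would demand far more than this.

    def contradicts(a, b):
        # Two beliefs clash if they affirm and deny the same claim.
        return a[0] == b[0] and a[1] != b[1]

    def coheres(candidate, beliefs):
        # The candidate is admissible iff it clashes with no held belief.
        return all(not contradicts(candidate, held) for held in beliefs)

    my_image = {("wall ahead", True), ("door to the left", True)}
    print(coheres(("wall ahead", False), my_image))     # False: clashes with my image
    print(coheres(("floor is solid", True), my_image))  # True: nothing in my image objects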

Then of course there's the CONSENSUS theory of truth . . .

∂25-Jan-83  1645	BATALI @ MIT-MC 	correspondence theory  
Date: Tuesday, 25 January 1983  19:26-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 25 Jan 1983  18:54-EST from GAVAN

    From: GAVAN

        From: BATALI

        No.  From Bertrand Russell:  "truth consists in some form of
        correspondence between belief and fact."  The point is that there are
        objective facts, and the truth of sentences depends on those here.

  There aren't any objective facts.  That's my argument by
  assertion.  My proof is this: No fact is objective because only
  subjects can have the concept of a "fact."  There are only subjective
  facts.  Now, try to prove that there are objective facts.

I wasn't trying to PROVE the correspondence theory in this passage,
only to define it.  What I said is, I think, Bertrand Russell's point.

        Nowhere is any one-to-one correspondence assumed.  In fact, the
        correspondence theory isn't about minds at all -- it is about
        sentences or propositions or whatever can be said to be true or
        false. 

    What, other than a mind, emits sentences and propositions?  If the
    correspondence referred to is something like Tarski's "It's true that
    foo is bar iff foo is bar," then the correspondence theory is not only
    wrong, it's meaningless.

Tarski's idea is this: "Foo is bar" is true iff foo is bar.  The point
here is the relation between a representation (the string "Foo is
bar") with a certain state of affairs.  The distinction is between
using the string to describe a state of affairs and mentioning it to
discuss its truth conditions.  I don't claim that this is profound or
even correct.  But it is meaningful: it posits a relationship between
sentences and states of affairs they describe.
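
The use/mention point can be put in a few lines (a minimal sketch; the toy
`world` dictionary and the parsing of "X is Y" strings are illustrative
assumptions, not Tarski's construction): the mentioned string is just data,
and its truth is settled by looking at the state of affairs, not at the
string.

    # Minimal sketch of a Tarski-style truth condition for sentences of
    # the form "<subject> is <predicate>", evaluated against a toy
    # "state of affairs".

    world = {"snow": {"white", "cold"}, "grass": {"green"}}

    def is_true(sentence, facts):
        # "X is Y" is true iff Y is among the properties of X in `facts`.
        subject, _, predicate = sentence.partition(" is ")
        return predicate in facts.get(subject, set())

    print(is_true("snow is white", world))    # True: the state of affairs obtains
    print(is_true("grass is white", world))   # False: it does not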

Also: I admit that minds emit and receive propositions and sentences.
But, as Putnam argues, "meanings just ain't in the head!"  The
meanings of propositions (whatever they are) depend on external states
of affairs.  Certainly the reference relationship does.  When I type
GAVAN, I am not referring to anything in my head.  I am referring to
some person in the world.  And the truth of statements I might make
about that person depends on facts about that person.

        Nowhere is any claim made about "copying" the world.

   Oh, I think Locke said something about this, as I recall.  Maybe Hume
   as well.  I'll have to look this one up.  Putnam seems to equate the
   two in *Reason, Truth, and History*.  Where have you looked?

Hume talks about concepts "resembling" their referents.  Fodor
actually worries quite a bit about this.  "Resemblance", and "copying"
can't be the representation relationship because they are both very
ill defined.  How could some patterns of neuron firing ever resemble a
duck?  This problem of reference is indeed a hard one, and the
functionalist philosophers seem to be making headway at it.  But it is
still a hard problem whether or not you are a correspondence theorist.

        And the correspondence theory says absolutely nothing about what or
        what not is innate.

    Well, it certainly IMPLIES something about it.

What?  

        It seems that we might really be arguing about different things.

    Maybe.  Maybe not.  I still object to any talk about "objective" reality.

Actually (snicker), you seem to enjoy it quite a lot.  So do I.

∂25-Jan-83  1655	GAVAN @ MIT-MC 	sentences
Date: Tuesday, 25 January 1983  19:41-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: sentences
In-reply-to: The message of 25 Jan 1983  18:25-EST from DAM

    Date: Tuesday, 25 January 1983  18:25-EST
    From: DAM

    	Date: Tuesday, 25 January 1983  13:58-EST
    	From: MINSKY

    	Really, this is too low a level for discussion.  The Chomskian
    	linguists might take that attitude.  But we should discuss philosophy,
    	not religion.

    	I thought the question of the human universality of sentences
    was to be taken as an empirical issue.  Even if we ignore Chomskian
    linguistics the human universality of sentences seems empirically
    undeniable.

I think Marvin was objecting to the normative idea of grammaticality.
What constitutes a sentence?  Earlier you said something about a
subject and a verb phrase.  What constitutes a subject and a verb
phrase?

∂25-Jan-83  1703	GAVAN @ MIT-MC 	Practical Necessity
Date: Tuesday, 25 January 1983  19:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: Practical Necessity
In-reply-to: The message of 25 Jan 1983  19:07-EST from BATALI

    From: BATALI

        From: GAVAN

        I would prefer to say that my
        beliefs about things in my image of reality are true only if they
        cohere with my other beliefs about things in my image of reality.  I
        am prepared to recognize the wall where I thought there was an opening
        because I am not presumptuous enough to assume that my image of
        reality truly corresponds to the reality my faith tells me is there.

    This seems like a rather convoluted way of just acting in accord with
    objective reality.  

No such thing.

    Why affix the "image of" to all mentions of reality?  

Because that's all I have access to.

    Certainly reality may be such that we can just get a
    subjective picture of it, but it is still reasonable to act as if it
    is there.

Not always.

    Suppose that something believed as an article of faith is such that it
    is virtually impossible not to hold it and deal effectively with one's
    goals.  

I can suppose this, but I don't need to.  I can deal effectively with my own
goals without believing that my image of reality is objective, thank you.
In fact, I can deal with my own goals more effectively precisely because I
recognize that my image of reality is subjective.

    It seems that such a belief is held not as a matter of faith
    but of (practical) necessity.  This is what I understand Kant as
    saying (to drop a relatively heavy name).  I certainly must understand
    the limitations of the faith -- but nevertheless I must hold it.

The problem is, some people of this faith don't realize its limits.
Try not holding this faith for a couple of hours.  See if you walk
into any walls.

    Except, perhaps, for mathematical truth, I don't think that there are
    any stronger states of belief than those in such practical
    necessities.  To the degree that if belief in X is justified then X is
    true, "objective reality" is thus "true."

What constitutes justification?  (Yow! Have we returned to Lakatos yet?)

    I suppose the truth of practical necessities is just as much a matter
    of faith as anything else, and is probably not necessary itself, so
    perhaps this argument won't convince anyone.  On the other hand, it is
    the only argument I can think of in support of the belief: "beliefs
    must be coherent." 

It depends on what you mean by "coherent".  By coherent I certainly don't
mean "comprehensible".  I mean that the belief coheres with other beliefs.
It's not just a kludge "adhering" to the structure of knowledge.

∂25-Jan-83  1813	John Batali <Batali at MIT-OZ at MIT-MC> 	Objectivity, ad nauseum
Date: Tuesday, 25 January 1983, 20:47-EST
From: John Batali <Batali at MIT-OZ at MIT-MC>
Subject: Objectivity, ad nauseum
To: GAVAN at MIT-MC, BATALI at MIT-OZ at MIT-MC
Cc: JMC at SU-AI, phil-sci at MIT-OZ at MIT-MC
In-reply-to: The message of 25 Jan 83 19:56-EST from GAVAN at MIT-MC


    From: GAVAN @ MIT-MC

    I can deal effectively with my own
    goals without believing that my image of reality is objective, thank you.
    In fact, I can deal with my own goals more effectively precisely because I
    recognize that my image of reality is subjective.

I'm not claiming that anyone believes or ought to believe that his image
of reality is not subjective.  Obviously not. (I HAVE been reading your
messages!)  But what is it that you have an image OF, when you have an
image of reality?  Do you have:

	1.  A subjective image of a subjective reality, or:
	2.  A subjective image of an objective reality?

If the first case, what is the point of the idea of reality at all?  Why
is it useful for anything?  It would seem that you would have a muddled
view of something that is itself muddled by its very nature.  Now I know
that this sort of attitude was very popular among German philosophers
for a while, but really, what use is it?  For example: a technical
problem:  what is the use of the two levels of subjectivity?  And what
is at the bottom, just a subjective construct, or do the levels continue
forever?  How is an agent ever to decide between looking for water and
convincing itself that it is not thirsty?  At least in case 2, it can
consider the possibility that it might REALLY BE thirsty.

And please: I am NOT denying that our view of reality is subjective.  I
am claiming that it is useful to think of that subjective view as being
of something objective.  That is why we worry about making and testing
theories -- to build our view of reality.

In the second case, we admit that we are clouded by our subjectivity,
but act as if there is something through the clouds.  I understand the
arguments against ever "knowing" objectively.  But I can have an idea of
what sort of thing objective reality is -- of course so would someone
who doesn't believe in it.  The tree either makes a noise or it doesn't,
I admit, I don't know which.  But just in being able to phrase the
question, I suppose that an answer exists.  What use is it to suppose
that the question is, in principle, meaningless because there is no way
for me to know the answer?  Certainly, in these contrived examples (or
quantum mechanics) I must admit that there are problems.  But in the
real world, it seems always useful to suppose that there are ways to find
out if statements are true.  And this is just a statement of the belief
that reality is something real we can get a "better" view of.

Note that I am arguing for the practicality of belief in an objective
reality -- not "proving" the existence of objective reality.  My earlier
message was an argument for the reality of beliefs that are practical
necessities.  I was essentially "defining" reality as the set of beliefs
it is of practical necessity to hold.  One of those beliefs is that
there are non-subjective facts.

∂25-Jan-83  1842	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Date: Tuesday, 25 January 1983  19:32-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 24 Jan 1983  19:11-EST from KDF

    From: KDF

    Although the data is controversial, there are also strong indications
    we do not use a single, uniform way of drawing conclusions.  Our
    perceptual system seems to get into the act with diagrams, for
    instance.  The idealization of mind to a statement manipulator may
    throw away too many of the interesting phenomena.

I agree.  Although something like a logical mechanism might be innate,
there's no reason to presume that it's the whole story.  If you are
right KDF, does this have any implication for the correspondence
theory of truth?

∂25-Jan-83  1908	GAVAN @ MIT-MC 	correspondence theory   
Date: Tuesday, 25 January 1983  21:54-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: correspondence theory  
In-reply-to: The message of 25 Jan 1983  19:26-EST from BATALI

    From: BATALI

        From: GAVAN

            From: BATALI

            No.  From Bertrand Russell:  "truth consists in some form of
            correspondence between belief and fact."  The point is that there 
            are objective facts, and the truth of sentences depends on those 
            here.

      There aren't any objective facts.  That's my argument by
      assertion.  My proof is this: No fact is objective because only
      subjects can have the concept of a "fact."  There are only subjective
      facts.  Now, try to prove that there are objective facts.

    I wasn't trying to PROVE the correspondence theory in this passage,
    only to define it.  What I said is, I think, Bertrand Russell's point.

OK, you're off the hook.  Would anyone like to attempt to prove the
correspondence theory?  Notice how ridiculous Russell's posited
correspondence between beliefs and facts is once you realize that a
"fact" is just another belief.

            Nowhere is any one-to-one correspondence assumed.  In fact, the
            correspondence theory isn't about minds at all -- it is about
            sentences or propositions or whatever can be said to be true or
            false. 

        What, other than a mind, emits sentences and propositions?  If the
        correspondence referred to is something like Tarski's "It's true that
        foo is bar iff foo is bar," then the correspondence theory is not only
        wrong, it's meaningless.

    Tarski's idea is this: "Foo is bar" is true iff foo is bar.  

That's what I said.

    The point here is the relation between a representation (the string
    "Foo is bar") with a certain state of affairs.  The distinction is
    between using the string to describe a state of affairs and mentioning
    it to discuss its truth conditions.  I don't claim that this is
    profound or even correct.  But it is meaningful: it posits a
    relationship between sentences and states of affairs they describe.

The problem, according to Putnam, is that there are too many such
correspondences for a correspondence theory to have any meaning.

    Also: I admit that minds emit and receive propositions and sentences.
    But, as Putnam argues, "meanings just ain't in the head!"  The
    meanings of propositions (whatever they are) depend on external states
    of affairs.

What "external" states of affairs?  The ones you believe?  Meaning
depends on use, I thought (along with Wittgenstein).  Meanings depend
on use in a linguistic community.  Back to the consensus theory!
Russell's bug is that he believes a naive theory of meaning.  He seems
to think that words mean the same things for different people.  This
just isn't so.  To be sure, there's some overlap, but meanings are not
so consistent across individuals, space, and time.

Some people at the lab, for instance, seem to take the sentence
"You're losing" to refer to a permanent condition from which there is
no hope of recovery.  Others take it to signify a temporary condition
which might be overcome by consulting a wizard.

    Certainly the reference relationship does.  When I type GAVAN, I am
    not referring to anything in my head.

Sure you are.  

    I am referring to some person in the world, 

What world?  The one in your head?

    And the truth of statements I might make about that person depends
    on facts about that person.

What facts?  The ones you believe?

            Nowhere is any claim made about "copying" the world.

       Oh, I think Locke said something about this, as I recall.  Maybe Hume
       as well.  I'll have to look this one up.  Putnam seems to equate the
       two in *Reason, Truth, and History*.  Where have you looked?

    Hume talks about concepts "resembling" their referents.  Fodor
    actually worries quite a bit about this.  "Resemblance", and "copying"
    can't be the representation relationship because they are both very
    ill defined.  

So is correspondence.

    How could some patterns of neuron firing ever resemble a
    duck?  This problem of reference is indeed a hard one, and the
    functionalist philosophers seem to be making headway at it.  But it is
    still a hard problem whether or not you are a correspondence theorist.

Yup.

            And the correspondence theory says absolutely nothing about what or
            what not is innate.

        Well, it certainly IMPLIES something about it.

    What?  

It implies that what we believe is reality IS reality.  There's no
possibility that we might have some innate equipment (like the
concepts of space and time, or certain perceptual mechanisms) which
might distort our view of the world at the same time it helps us
perceive it.

            It seems that we might really be arguing about different things.

        Maybe.  Maybe not.  I still object to any talk about "objective" 
        reality.

    Actually (snicker), you seem to enjoy it quite a lot.  So do I.

Sometimes.  But I still object to talk about "objective" reality,
especially when coming from someone who claims what I've said is
"muddled" and "scientifically unpromising."  There's nothing more
dangerous than a scientist who claims to be objective.  See Habermas,
*Knowledge and Human Interests*.

∂25-Jan-83  2113	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
Date: Tuesday, 25 January 1983  23:34-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 25 Jan 1983  18:25-EST from DAM


 DAM:	I thought the question of the human universality of sentences
     was to be taken as an empirical issue.  Even if we ignore Chomskian
     linguistics the human universality of sentences seems empirically
     undeniable.


Well, I'm denying it.  So far as I know, the only humans who speak in
sentences are those who learn to in cultures that already use
them.  There are many who don't.  And normal people do not speak
exclusively in sentences all the time.  That "universal" should be
only ".

∂25-Jan-83  2134	GAVAN @ MIT-MC 	Objectivity, ad nauseum 
Date: Wednesday, 26 January 1983  00:07-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   JMC @ SU-AI, phil-sci @ MIT-OZ
Subject: Objectivity, ad nauseum
In-reply-to: The message of 25 Jan 1983 20:47-EST from John Batali <Batali>

    From: John Batali <Batali>

    . . .

    Do you have:

    1.  A subjective image of a subjective reality, or:
    2.  A subjective image of an objective reality?

Neither.  I have an image.  I also have a reality.  They are one and the same.

    . . .
    
    It would seem that you would have a muddled
    view of something that is itself muddled by its very nature.  Now I know
    that this sort of attitude was very popular among German philosophers
    for a while, but really, what use is it?  

Does everything have to have a use?  

    For example: a technical problem: what is the use of the two
    levels of subjectivity?  

Huh?  What two levels of subjectivity?  I am not a German philosopher.

    . . .

    And please: I am NOT denying that our view of reality is subjective.  

I know, but then most of my flames about this haven't been directed at you.
My opinions are closer to yours than you think.

    I am claiming that it is useful to think of that subjective view as being
    of something objective.  That is why we worry about making and testing
    theories -- to build our view of reality.

OK.  Fine.  I have no quarrel with that.  In fact, I've even said that
I can make this leap of faith, although I don't need to in order to
worry about making and testing theories.  I used to share your faith,
but I've found I don't need to.  It may have practical utility for
some.  The assumption that there's an objective reality "out there"
somewhere can keep you from having existential angst, if you're prone
to that sort of thing.

I refuse to pretend that the view of reality I'm building, or the one
you're building, or the one that anybody's building, is in any sense
objective.  I don't know about you, but I don't need to make this
assumption.  If we ARE just brains in a vat it wouldn't make any
difference to me.

    In the second case, we admit that we are clouded by our subjectivity,
    but act as if there is something through the clouds.  I understand the
    arguments against ever "knowing" objectively.  But I can have an idea of
    what sort of thing objective reality is -- of course so would someone
    who doesn't believe in it.  The tree either makes a noise or it doesn't,
    I admit, I don't know which.  But just in being able to phrase the
    question, I suppose that an answer exists.  What use is it to suppose
    that the question is, in principle, meaningless because there is no way
    for me to know the answer?  Certainly, in these contrived examples (or
    quantum mechanics) I must admit that there are problems.  But in the
    real world, it seems always useful to suppose that there are ways to find
    out if statements are true.  And this is just a statement of the belief
    that reality is something real we can get a "better" view of.

I didn't say the tree question was meaningless.  In fact, I think it
means quite a lot.  I agree that it's "useful" to suppose that there
are ways to find out if a theory is true, and in my flames against the
correspondence theory I offered an alternative theory of truth.  If I
thought that one wasn't necessary I wouldn't have forwarded an
alternative.  So, in my world-view there IS a way to find out if
"statements" are true, and that way, to be sure, involves
experimentation.  But in an experiment it's not necessary to presume
that one is testing the correspondence of a set of statements (a
theory) to the "real" world.  The experimenter just tests an image
(the experiment) against the theory given his/her pre-established
beliefs.  

    Note that I am arguing for the practicality of belief in an objective
    reality -- not "proving" the existence of objective reality.  My earlier
    message was an argument for the reality of beliefs that are practical
    necessities.  I was essentially "defining" reality as the set of beliefs
    it is of practical necessity to hold.  One of those beliefs is that
    there are non-subjective facts.

I see the idea of "non-subjective facts" as a contradiction in terms.
Those beliefs we consider to be facts are the ones we all consent to,
like "snow is white."  There are also "non-subjective facts" within
certain linguistic communities that are considered subjective opinions
in other linguistic communities or in the community-at-large, like
"Skinner is wrong" or "bankers are greedy".  I might believe what you
say about "non-subjective facts" if you can tell me where the boundary
is between fact and opinion.

A point about practicality: Lots of beliefs and actions are practical,
but that doesn't necessarily make them correct or even a good idea.
For example, it's practical to believe that socialism is bad.  It's
practical to pay your taxes.  It's practical not to buck the system.
It's practical not to disagree with full professors.  When you're
around RMS it's practical to use system 91.

It would have been practical for Peirce to have kept his mouth shut
and agreed with the grand poo-bahs of his day.  But we wouldn't have
Peirce today, or even the idea of pragmatism.  It would have been
practical for Socrates to say he was sorry.  In fact, this whole
discussion reminds me of the allegory of the cave . . .

Please don't take this too personally.  It's certainly not meant that
way.  I know what you're arguing and I can see its practical utility.
But I can't see its practical NECESSITY.

∂25-Jan-83  2251	JCMa@MIT-OZ 	The Objectivity of Mathematics  
Date: Wednesday, 26 January 1983, 01:30-EST
From: JCMa@MIT-OZ
Subject: The Objectivity of Mathematics
To: GAVAN@MIT-MC
Cc: phil-sci@mc
In-reply-to: The message of 25 Jan 83 16:54-EST from GAVAN at MIT-MC


    Date: Tuesday, 25 January 1983  16:54-EST
    From: GAVAN @ MIT-MC
    In-reply-to: The message of 24 Jan 1983  18:36-EST from MINSKY

	Date: Monday, 24 January 1983  18:36-EST
	From: MINSKY

	From DAM: There is very good (I think) independent evidence for such
	      mechanisms (the universality of sentences) and further I think
	      there is good independent evidence for the innateness of
	      mathematics.

	The evidence for universality of "sentences" is pretty poor.  What there
	is evidence of is that children can learn pretty complicated speech
	patterns.  What seems innate, if anything, is "words" - compact units.
	The sentence thing only occurs in cultures, and only a portion of
	normal speech uses sentences.

    I agree.  But this doesn't mean that some sort of logical mechanism is
    not innate.  Its "sentences" may not be anything like the sentences of
    natural language or even of logic, but might instead be configurations
    of brain cells.  Is there not a logic to a DNA molecule?

The innate components are those that implement the core of the
meta-epistemology.

∂25-Jan-83  2336	JCMa@MIT-OZ at MIT-MC 	Winograd interview in Le Monde (FTPing of)
Date: Wednesday, 26 January 1983, 02:35-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Winograd interview in Le Monde (FTPing of)
To: self-org-net@MIT-OZ at MIT-MC, phil-sci-net@MIT-OZ at MIT-MC

A scribe version of the translation is available for your FTPing
pleasure from:
			  AI:COMMON;WINO GRAD

p.s. Send your thanks and complaints to HORMOZ@MC

∂26-Jan-83  0203	ISAACSON at USC-ISI 
Date: 26 Jan 1983 0128-PST
Sender: ISAACSON at USC-ISI
From: ISAACSON at USC-ISI
To: PHIL-SCI at MIT-MC
Cc: isaacson at USC-ISI
Message-ID: <[USC-ISI]26-Jan-83 01:28:25.ISAACSON>

I received last night a message from someone of the phil-sci
audience who called himself "an unsuspecting (puzzled) soul."  I
suspect he must be in good company, considering the way things
seem to develop.

He asked about DAM's syllogism, i.e.,

       All birds fly;

       Fred is a bird;

       Then Fred flies.

and wondered if an Intuitionist form of that might be:


       Some birds fly;

       Fred is a bird;

       Maybe Fred flies.


Now, this is, essentially, how I responded.

I can't speak for the Intuitionist position.  I do prefer the
second inference-chain you propose, though.  It is a *weaker*
inference, but, it is potentially generative.  It is close to a
form of inference which is sometimes called "abduction", favored
by Charles Sanders Peirce.  This sort of inference is held to
underlie hypothesis-formation.  Abductive inference can be stated
as follows:


       The surprising fact, C, is observed;

       But if A were true, C would be a matter of course;

       Hence, there is reason to SUSPECT that A is true.


The *tentative* (or hypothetical) nature of the conclusions in
the two latter cases above is what makes these inferences
potentially GENERATIVE, in the sense of knowledge-generation;
whereas the deductive template inherent in DAM's syllogism makes
it a barren exercise in classical Aristotelian logic, with no
generative power beyond the (pre-programmed, as it were)
syllogistic chain itself.
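
A computational caricature of the abductive schema (a minimal sketch; the
rule list and the function name below are illustrative assumptions, not
Peirce's own formulation): given the surprising fact C and a stock of rules
"if A then C", abduction returns the A's worth suspecting, not conclusions.

    # Minimal sketch of abduction as hypothesis generation.
    # Each rule (A, C) is read as "if A were true, C would be a matter of course".

    rules = [
        ("it rained last night", "the grass is wet"),
        ("the sprinkler ran",    "the grass is wet"),
        ("the sun has been out", "the grass is dry"),
    ]

    def abduce(surprising_fact, rules):
        # Every A whose consequence matches the observation is a candidate hypothesis.
        return [a for (a, c) in rules if c == surprising_fact]

    print(abduce("the grass is wet", rules))
    # -> ['it rained last night', 'the sprinkler ran']   (reasons to SUSPECT, nothing more)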


Then I added a more technical explanation -


Let S be any given set; the variable x ranges over the objects of
S, and P(x) is a predicate of x.

In Intuitionist logic (according to Heyting, one of its bigwigs)
in order to assert,


                           for all x, P(x)

one has to provide a *constructive* [today one might say
"computable"] proof which is shown to specialize to a proof of
P(s) for each s in S.



In other words, if S is the set of all birds; and the predicate P
is "flight", before one can say "All birds fly" one has to
constructively establish the flyability of each and every bird,
including that of Fred the bird!  In other words, DAM's syllogism
is (pragmatically) just inadmissible in Intuitionist terms.


I don't know if Intuitionists would necessarily embrace the
proposed (i.e., second) version as their own, but I doubt that
they would be terribly uncomfortable with it.

                               [END OF INTRODUCTION]

I want to use that fairly long introduction to move into a
comparison between traditional deductive inference (such as DAM's
syllogism) and other types of inferences I consider to be
knowledge-generators, or "epistemogenic inferential processes."


DAM's syllogism can be re-stated as follows,

      For all x, P(x);

      s in S;

      Then P(s).

where s = `Fred' and all else as defined above.


As we just saw, the major premise above is not trivial to
establish under Intuitionist demands, and certainly cannot be
taken lightly.
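
To make the cost concrete (a minimal sketch, under the obviously artificial
assumptions that S is finite and that `can_fly` is an effective test; the
names are illustrative): the intuitionist's price for "for all x, P(x)" is a
check that specializes to each s in S, after which the instantiation to Fred
is the trivial step.

    # Minimal sketch: establish the major premise constructively over a
    # finite S, then instantiate it.  `birds` and `can_fly` are toy stand-ins.

    birds = ["robin", "sparrow", "fred"]

    def can_fly(bird):
        # Stands in for an effective verification of P(s) for one s.
        return True

    def all_birds_fly(birds):
        # The universal claim is asserted only after P(s) is checked for every s in S.
        return all(can_fly(b) for b in birds)

    if all_birds_fly(birds):       # the expensive, constructive part
        print(can_fly("fred"))     # the syllogistic step P(Fred) is then immediate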

DAM called his syllogistic world HYPOTHETICAL.  I grant him that
without reservation.  It is not only hypothetical, it is a
fantasy world, bearing no relation whatsoever to any empirical
evidence, and is INTENDED to be that way by the very nature of
the deductive-syllogistic beast!  The point is, of course, that
the correct way to view and use that kind of syllogism is with
the full realization that it is, indeed, INTENDED to be
content-free.  I.e., the skeletal syllogistic figure is of the
essence, and ONLY it, and any connection to the "real world"
through the interpretation of the ordinary MEANING of the English
words used to provide it with flesh is entirely inappropriate!
This singular situation would allow DAM to construct syllogisms
which are "isomorphic" to his first one with complete impunity,
at least from within the deductive world he chooses to cloister
himself in.  For example,


     All birds are five-legged mammals;

     Fred is a bird;

     Then Fred is a five-legged mammal.


Or,


     All Blacks are white;

     Fred is a Black;

     Then Fred is white.


All of the stuff you see above is entirely Kosher, viewed from
within deductive logic proper, and I have no quarrel with that.
BUT WHY TAKE EXTRA PRIDE IN PROMOTING THIS KIND OF BARREN
SYLLOGISTIC ACTIVITY WITHIN AI IS BEYOND ME!


What we sorely need are inferential processes that are capable of
generating new knowledge [through computational means!].  We need
to develop a broad class of "Epistemogenic Processes".  I think
it includes a family of inferences that can generate explanatory
hypotheses, and therefore, underlie theory-formation.

Peirce, the so-called "Father of Pragmatism" (he actually called
his creation "Pragmaticism"), devoted much of his massive
life-work to elaborating a type of inference he called
"abduction".  In his view, when contrasted with "induction" and
"deduction", it is the only truly creative mode of inference.  It
is THE epistemogenic agent.  The sort that yields new explanatory
hypotheses in scientific inquiry.  As a corollary he developed a
theory of the "Economy of Research", an obscure and understudied,
yet incredibly rich, research methodology.


I do agree with Minsky that we ought to be courageous and
resourceful enough to be willing to break new ground, without too
many hangups about "old stuff".  Yet, I think that we have an
incredibly fertile resource in Peirce, and we owe it to our
enterprise to COHERE what we are trying to do with what he has
already accomplished.


∂26-Jan-83  1825	←Bob <Carter at RUTGERS> 
Date: 26 January 1983  11:47-EST (Wednesday)
Sender: CARTER at RU-GREEN
From: ←Bob <Carter at RUTGERS>
To:   ISAACSON at USC-ISI
Cc:   PHIL-SCI at MIT-MC


I am the anonymous naif to whom ISAACSON refers, and I thank him and
DAM for their private explanations, and him again for public
amplification.

Another question:  Would it be wrong to identify "Some Elephants
Exist" as (potentially, anyhow) part of some epistemogenic
inferential process, and DAM's tautological syllogism with "and Two
is FOUR?"

←Bob

∂26-Jan-83  1840	John McCarthy <JMC@SU-AI> 	intuitionism      
Date: 26 Jan 83  1121 PST
From: John McCarthy <JMC@SU-AI>
Subject: intuitionism  
To:   isaacson%USC-ISI@MIT-MC
CC:   phil-sci%MIT-OZ@MIT-MC    

In reply to: Isaacson
As I understand intuitionism, the syllogism mentioned by the puzzled
soul is a perfectly good intuitionist syllogism.  It's just that
establishing "All birds fly" requires a constructive method.
       All birds fly;

       Fred is a bird;

       Then Fred flies.


∂26-Jan-83  1847	ISAACSON at USC-ISI 	Re:  intuitionism  
Date: 26 Jan 1983 1338-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  intuitionism
From: ISAACSON at USC-ISI
To: JMC at SU-AI
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]26-Jan-83 13:38:33.ISAACSON>


In-Reply-To: Your message of Wednesday, 16 Jan 1983, 11:21-PST


I'm in complete agreement.


The question, though, is would someone like DAM bother to worry
about constructive methods to establish such premises in the
first place.


p.s.  My net-address is ISAACSON at USC-ISI


∂26-Jan-83  2057	KDF @ MIT-MC 	The Objectivity of Mathematics 
Date: Wednesday, 26 January 1983  14:52-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 25 Jan 1983  19:32-EST from GAVAN

	I don't think whether or not we use, or to what extent we use,
innate logical mechanisms has anything to do with the correspondence
theory of truth.  The semantical story seems more or less independent
of how our inferential mechanisms actually work, and has instead to do
with whether we think they give adequate/true/useful results or not.

∂26-Jan-83  2125	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Date: Wednesday, 26 January 1983  14:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ mc
Subject: The Objectivity of Mathematics
In-reply-to: The message of 26 Jan 1983 01:30-EST from JCMa

    From: JCMa

        From: GAVAN

        Is there not a logic to a DNA molecule?

    The innate components are those that implement the core of the
    meta-epistemology.

Care to elaborate?

∂26-Jan-83  2134	GAVAN @ MIT-MC 
Date: Wednesday, 26 January 1983  15:40-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   PHIL-SCI @ MIT-MC
In-reply-to: The message of 26 Jan 1983  04:28-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI

    . . .

    I do agree with Minsky that we ought to be courageous and
    resourceful enough to be willing to break new ground, without too
    many hangups about "old stuff".  Yet, I think that we have an
    incredibly fertile resource in Peirce, and we owe it to our
    enterprise to COHERE what we are trying to do with what he has
    already accomplished.

I agree about Peirce wholeheartedly, and considering BATALI's
submissions about practicality, I would think he would agree also.  I
too agree with Marvin's predilection against the "old stuff", but with
certain reservations.  I've said before on another list that we
shouldn't throw the baby out with the bathwater, and I think there's a
real danger of doing this.  Anyway, I want to raise a different,
albeit related, point.

The danger of getting lost in the "old stuff" seems related to another
danger I've been reminded of by recent submissions about language.
There seems to me to be a real danger in drawing conclusions about
"the way the mind works" that are polluted by the peculiarities of the
English language and of the Western techno-rational languages in
general (I'm referring, of course, to the work of Benjamin Lee Whorf).
I don't mean that we should all learn to speak Hopi or Nootka or
something, but I do think that we should be careful not to reify in
our models and to posit as a universal mechanism of mind, phenomena
that are particular to our language and related languages (even if
we're modeling English text).  When people speak of sentences and of
subjects and verb phrases I wonder what they mean.  

I cannot hope to comment on the universality of sentences until I
understand just what is meant by the term "sentence."

∂26-Jan-83  2134	MINSKY @ MIT-MC
Date: Wednesday, 26 January 1983  18:23-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   PHIL-SCI @ MIT-MC
In-reply-to: The message of 26 Jan 1983  04:28-EST from ISAACSON at USC-ISI


ISAACSON:  In (Peirce's) view, when contrasted with "induction" and
     "deduction", it is the only truly creative mode of inference.  It
     is THE epistemogenic agent.  The sort that yields new explanatory
     hypotheses in scientific inquiry.  As a corollary he developed a
     theory of the "Economy of Research", an obscure and understudied,
     yet incredibly rich, research methodology.


Bravo to JDI's other remarks.  And perhaps Peirce discovered something
incredibly rich - I haven't encountered it, but also haven't been
convinced to invest in the search.  Two remarks:

1.  The goal of finding one, or a very few forms of inference seems
unrealistic to me.  To mathematically-oriented scientists, the virtue of
compact formulations is to prove theorems about them.  This is chronically
confused with the goal of using ideas or knowledge to get new ideas
and knowledge.  For the latter, I suspect, we need a wide 

∂26-Jan-83  2257	ISAACSON at USC-ISI 	Correction:  Heyting ==> Beth
Date: 26 Jan 1983 1724-PST
Sender: ISAACSON at USC-ISI
Subject: Correction:  Heyting ==> Beth
From: ISAACSON at USC-ISI
To: phil-sci at MIT-MC
Cc: isaacson at USC-ISI
Message-ID: <[USC-ISI]26-Jan-83 17:24:48.ISAACSON>


Sometime ago I mentioned a collaboration between a leading
intuitionist and Jean Piaget.  I mistakenly named Heyting, whereas
actually it was Evert Beth, who died unexpectedly in 1964.
Piaget's account of that joint effort is of some interest (in the
context of discussing scientific communities).

"In 1950 I published a work on the operational mechanisms of
logic, which my publisher decided to call Traite de Logique: Beth
criticised it very severely in the journal Methodos.  Father
Bochenski, who had requested this review, refused to publish my
reply, which I then reduced to a few lines, saying, in effect,
that if two authors fail to understand each other because their
points of view are so divergent, the only way of achieving some
useful and objective result is for them to co-operate in the
preparation of a joint work, where the same data are investigated
one by one until a mutually satisfactory assimilation of their
positions is reached.  It was along such lines that I wrote to
Beth and invited him..."

Ten years later they published a joint volume, later translated
into English: Mathematical Epistemology and Psychology E. W. Beth
/ Jean Piaget Gordon and Breach, New York, 1966 (326 pp)


∂26-Jan-83  2320	ISAACSON at USC-ISI 	Epistemogenic Stuff
Date: 26 Jan 1983 1323-PST
Sender: ISAACSON at USC-ISI
Subject: Epistemogenic Stuff
From: ISAACSON at USC-ISI
To: Carter at RUTGERS
Cc: PHIL-SCI at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]26-Jan-83 13:23:04.ISAACSON>
Redistributed-To: phil-sci at MIT-MC
Redistributed-By: ISAACSON at USC-ISI
Redistributed-Date: 26 Jan 1983


In-Reply-To: Your message of Wednesday, 26 Jan 1983, 11:47-EST


You are welcome.


"Would it be wrong to identify "Some Elephants Exist" as
(potentially, anyway) part of some epistemogenic inferential
process...  "


I think that existential statements such as this one *are*
useable within epistemogenic inferential processes.  I think that
some would say, though, that it should be supported by at least
*some* "intuition" [Oh, please don't ask me what that word
means...  I guess, it is itself some kind of a "hunch" or
"hypothesis" (?)] about the existence of constructive (or
computational) means to effectively verify the existence of at
least one such "Elephant".

Also, a guiding principle from Peirce's "Economy of Research"
should be observed, I think.  That is, on "pragmatic" grounds,
one should not conclude that *any* existential statement of the
figure "Some S Exist" has the same epistemogenic "strength".
There are gradations of strengths, relative to a given inquiry,
from very high to completely useless.  For example, one should
not jump to the conclusion that "Some Foo's Exist" is on equal
footing.

[It reminds me of the story about the guy who appeared to be
catching invisible insects in a room to the annoyance of everyone
and who claimed that he was catching "Yoopchicks".  When asked
what are YOOPCHICKS he replied: "I'll tell you AFTER I catch
one."

Well, I thought it is not an old joke in this country too...  ]


I'll let DAM respond to the second part of your question.


-- JDI


∂26-Jan-83  2326	MINSKY @ MIT-MC
Date: Thursday, 27 January 1983  00:39-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, PHIL-SCI @ MIT-MC
In-reply-to: The message of 26 Jan 1983  18:23-EST from MINSKY


Sorry about message interrupted mysteriously.  Worse, I forgot what
point 2 was.

What I was trying to say is that I think Philosophy has become badly
screwed up, in getting confused between finding mechanisms for
plausible inference - and in trying to find ways to prove that
proposed mechanisms are sound.

The result of this confusion is that Philosophers have tended to
consider only inferential systems that are too simplistic to work -
because of the vision of proving things about them.  For example,
there are some plausible inference methods in my paper on Frames that many
people have found to be quite powerful - e.g., assignments, excuses,
difference networks, uniframing schemes.  

On the other hand, so many technical problems remain in converting
from deduction schemes, or abduction schemes, or the dialectical
schemes I've seen, that (in my view) common prudence would suggest
that these are relics from an early pre-computational era.

In other words, the schemes for learning and reasoning in those
discussions from the past have not turned out scientifically sound.
(McCarthy believes that they may be repairable, with enough further
work.  I think he's right, but (dialectically) the repairs will make
them at least as complicated and hard to prove theorems about, in the
end, as the kind I already have working better now.)

I found, in Freud's early thinking, what I consider good ideas.  But
his later work was handicapped by technical prematurity and I have not
found, nor believe it has, as much value.  The work of his followers,
like the Philosophers I complain of above, seems to be almost entirely
worthless, since it seems mainly haggling about how to salvage his
early intuitions to fit other situations, and "prove" that his ideas
can so account for things, which they almost surely can't.

So now, though it may seem ignorant and arrogant, I tend to conjecture
that the alleged treasures in Peirce and Kant and others have the same
character.  As Feynman has said of pre-quantum mechanics, their work
was elegant and beautiful but the world view was inadequate and it is
easier to start afresh, put the new ideas first, and (RPF did not say
this) let the historians try to show that there was nothing new if they
can.

∂27-Jan-83  0728	ISAACSON at USC-ISI 	First Peirce - Then the Bible!    
Received: from MIT-MC by SU-AI with NCP/FTP; 27 Jan 83  07:28:02 PST
Date: 27 Jan 1983 0710-PST
Sender: ISAACSON at USC-ISI
Subject: First Peirce - Then the Bible!
From: ISAACSON at USC-ISI
To: MINSKY at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]27-Jan-83 07:10:48.ISAACSON>


In-Reply-To: Your messages of last night


I'm gratified to see you agree with the bulk of what I said.  I
guess it's no wonder.  After all, part of what I had said was
that I agreed with part of what you had said.  I don't know
whether they call it positive feedback, dialectical convergence,
or what, but it's sure nice to be developing at least partial
agreements in confounding debates.


You said something about seeming ignorant and arrogant -

I think I mentioned to you on another list that my entire formal
academic training is in straight engineering (well, with a lot of
math thrown in).  Never took a single philosophy course.  In
fact, when I took my first two degrees, at the Israel Institute
of Technology, they taught no humanities at all!  The only
non-science, non-engineering, non-math, courses I ever took
beyond high-school were a single course in economics and one year
of technical French.  So please don't waive your "ignorance" at
me because I'll race you on that issue and defeat you in no
time...

As to being arrogant - in my experience, no one who admits THAT
can possibly also be THAT.

So the issues cannot possibly be ignorance or arrogance.  It must
be deeper than that, if I may say so, and there is no way in the
world that I'm going to tell you that I know that your intuition
is wrong here, because I simply don't.

On the whole I empathize with your predicament.  Let me put it
this way [and having some fun can't possibly hurt any of us, I
can tell]: My position is profoundly against promoting philosophy
or pragmatism pre-maturely.  For it is positively and patently
NOT pragmatic to push professionals [of any kind, including
Engineers, per example] into Peirce's Pragmatism to promote their
pragmatic predisposition to Problem-Solving.  They're perfectly
positioned by prior professional predilections toward pragmatic
professional performance.  [Perhaps Papert please provide a
pretty professorial proof to the present pragmatic principle...
Well, enough of that].


So Levitt can relax and so can I, and hordes of others who are not
professional philosophers and don't care to become any (visible)
fraction thereof.

Yet, why do I think that we do need a *reasonable* interaction
with judiciously *selected* philosophical ideas?  It has to do
with an effective way I perceive for knowledge-generation that
uses metaphoric devices (I really should say processes) through
what I sometime call "dialectical bootstrapping" or "metaphoric
switching".

[Some of this was developed in discussions with Gavan and JCMa
and I wish to give them credit for stimulating some of these
thoughts without saddling them with any deficiencies which may
be detected by the ever-watchful audience.]

In a nutshell [and this is really very sketchy and probably
sounds improbable at first reading], in order to develop a new
epistemic context (in this case, the body of knowledge relating
to computational "mind machines") I think that one needs "strong"
and fertile metaphors from other contexts, just to start the
wheels spinning.  I think that *portions* of certain
philosophical epistemologies, however imperfect, inconclusive,
and what not, can serve as effective *initial* metaphors for our
enterprise.  As we progress with the "mind machines" we can use
them in turn and switch their roles to use them as metaphors to
make refinements in the original philosophical contexts, and so
on.  [From your comment on McCarthy's approach I can't tell if he
is inclined to go along with this, but this is a good time to
ask.]

The game does not have to be played only between mind-machines
and philosophy.  It can also be played between mind-machines and
cognitive science and so on.  In fact, PRAGMATICALLY, I'd
emphatically use *any* foreign (i.e., external to mind-machines)
context that can give me high metaphoric strength, at a given
stage of development, to advance the progress of the center of
our concern, i.e., the mind-machine.  And I really don't care
what it is, and I don't have to BELIEVE in it per se, provided it
does the job for me in the area of chief concern.  And yes, you
guessed it, I'd use the Bible itself if I have to.

∂27-Jan-83  1542	DAM @ MIT-MC 	The Objectivity of Mathematics 
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  15:42:10 PST
Date: Thursday, 27 January 1983  17:44-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics

 	Date: Tuesday, 25 January 1983 23:34-EST
	From: MINSKY

	Well, I'm denying it.  So far as I know, the only humans who speak in
	sentences are those who learn to in cultures that already use
	them.  There are many who don't.  And normal people do not speak
	exclusively in sentences all the time.  That "universal" should be
	only ".

I am still confused as to what you are saying. Which of the
following positions do you take?

1)  There are human cultures in which members rarely (if ever)
use sentences.

2)  All human cultures use sentences but an individual raised in
the absence of culture would not use them.

	I think the first position is wrong and the second is irrelevant
(a person raised in the dark does not develop a normal (innate)
visual system).  The universality (i.e. present in all cultures) of
sentences is, I think, good empirical evidence for innate cognitive
mechanisms.

	David Mc

∂27-Jan-83  1550	DAM @ MIT-MC 	intuitionism    
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  15:49:56 PST
Date: Thursday, 27 January 1983  17:54-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   ISAACSON @ USC-ISI
cc:   phil-sci @ MIT-OZ
Subject: intuitionism


	The question, though, is would someone like DAM bother to worry
	about constructive methods to establish such premises in the
	first place.

	The answer to this question depends on what the statement
"all foos are grithcy" is taken to mean (in a precise way).  I do
not understand the details of intuitionistic logic but I do know
that it can be given a semantics.  Thus the same sentence can mean
two different things and a proof of one meaning is not necessarily
a proof of the other meaning.  If I were to try to prove that the
intuitionistic meaning was true I would use different proof techniques
than when I try to prove that the standard reading holds.
	The real difference between intuitionistic logic and "normal"
logic concerns the meaning of negation (not universal quantification).
The semantics for intuitionistic logic is based on Kripke structures
similar to those used in the semantics for modal logic.
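
A small sketch of the idea (the three-world model and the function names
are illustrative assumptions, not a full semantics): in a Kripke model
whose valuation only grows along the accessibility order, NOT-p is forced
at a world just when p is forced at no world reachable from it, which is
why negation, rather than universal quantification, is where the two
logics come apart.

    # Minimal sketch of Kripke forcing for atoms and negation.
    # `later[w]` lists the worlds reachable from w (including w itself);
    # `holds[w]` is the set of atoms true at w, and it only grows along `later`.

    later = {"w0": ["w0", "w1", "w2"], "w1": ["w1"], "w2": ["w2"]}
    holds = {"w0": set(), "w1": {"p"}, "w2": set()}

    def forces_atom(w, p):
        return p in holds[w]

    def forces_not(w, p):
        # NOT-p is forced at w iff p is forced at no world reachable from w.
        return all(not forces_atom(v, p) for v in later[w])

    print(forces_atom("w0", "p"))   # False: p is not yet established at w0
    print(forces_not("w0", "p"))    # False as well: p becomes true at w1
    # So at w0 neither p nor NOT-p is forced -- excluded middle fails here.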

	David Mc

∂27-Jan-83  1607	DAM @ MIT-MC 	Earlier Work    
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  16:06:54 PST
Date: Thursday, 27 January 1983  18:41-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Earlier Work


	Date: Thursday, 27 January 1983  00:39-EST
	From: MINSKY

	The result of this confusion is that Philosophers have tended to
	consider only inferential systems that are too simplistic to work -
	because of the vision of proving things about them.

You mean like Solomonoff complexity theory.

	I found, In Freud's early thinking, what I consider good ideas.  But
	his later work was handicapped by technical prematurity and I have not
	found, nor believe it has, as much value.

I agree that premature precision can handicap a good idea.  I think
the conversion of the notion of Occam's razor into Solomonoff complexity
theory is a good example.

	I tend to conjecture
	that the alleged treasures in Peirce and Kant and others have the same
	character.  As Feynman has said of pre-quantum mechanics, their work
	was elegant and beautiful but the world view was inadequate.

	Quantum mechanics could never have gotten off the ground
without sophisticated pre-existing mathematical and physical ideas.  It
is true that a lot of physical theories had to be thrown away but much
of the pre-existing highly developed theory was absolutely essential
and remained intact.  What should we throw out now in AI and what
should we keep?  I do not think that discussions at this level of
abstraction can provide arguments for particular positions in AI.

	David Mc

∂27-Jan-83  1616	DAM @ MIT-MC 	Summary    
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  16:15:56 PST
Date: Thursday, 27 January 1983  19:00-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Summary


	I believe that the notion of sentence, statement, and Tarskian
truth will be as important to AI as Hilbert spaces and operators are
in quantum mechanics.  I have tried to present evidence for the
importance of these notions (the human universality of sentences and
the apparent innateness of mathematical truth).  I am not saying that
these notions currently provide a plausible theory of cognition, just
as Hilbert spaces do not by themselves provide a theory of quantum
physics.
	You have made it clear that you think these notions are not very
important and that the only "old" mathematical notions you do consider
important are the fundamental ideas in computation theory (simple models
of computation).  You claim that the new revolutionary ideas you have been
developing are independent of other old notions and that you are making
more progress toward constructing a real artificial intelligence than
people using old notions (e.g. the Tarskian notion of truth).

	I think that the construction of a real artificial intelligence
requires that one understand sophisticated innate mechanisms (though
I don't claim to know what those mechanisms are).  Thus I think that
real success in AI is not likely to be near at hand.
	You seem to discount the importance of sophisticated innate
mechanisms and (in my interpretation) feel that a relatively simple
highly parallel architecture will do the job.  Thus you seem to think
that real success in AI is likely to be near at hand.


	I am unimpressed by your claims about your ideas.

	You are unimpressed by my arguments for the importance of the
"old" ideas.


	Does this sound like a fair summary to you?

		David Mc

∂27-Jan-83  2024	MINSKY @ MIT-MC 	The Objectivity of Mathematics   
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  20:24:23 PST
Date: Thursday, 27 January 1983  22:44-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 27 Jan 1983  17:44-EST from DAM


DAM: The universality (i.e. present in all cultures) of sentences is,
     I think, good empirical evidence for innate cognitive mechanisms.


Really, now.  I'm sure all cultures have words for MOTHER and CHILD.
It is in the nature of things that all cultures have mothers and
children.  Is that good "empirical evidence" that those words are
innate?

∂27-Jan-83  2028	MINSKY @ MIT-MC 	Summary 
Received: from MIT-ML by SU-AI with NCP/FTP; 27 Jan 83  20:28:29 PST
Date: Thursday, 27 January 1983  22:53-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Summary
In-reply-to: The message of 27 Jan 1983  19:00-EST from DAM


DAM:	You seem to discount the importance of sophisticated innate
        mechanisms and (in my interpretation) feel that a relatively
        simple highly parallel architecture will do the job.  Thus you
        seem to think that real success in AI is likely to be near at
        hand.

	I am unimpressed by your claims about your ideas.


I realize for the first time that you do not know much about my ideas.
If you examine Learning Meaning, you will find that I have probably
proposed the most elaborate innate mechanisms in the literature.  My
position is that " ones proposed by you - such as sentences - are too
simplistic since they don't explain much linguistic behavior.

The idea that some simple parallel architecture will suffice is surely
someone else's.  The idea that real success in AI is less than 50 to several
hundred years away is not characteristic of me.

I cannot imagine where you get your stereotypes.

∂28-Jan-83  0215	GAVAN @ MIT-MC 	Earlier Work  
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  02:15:43 PST
Date: Friday, 28 January 1983  05:03-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Earlier Work
In-reply-to: The message of 27 Jan 1983  18:41-EST from DAM

    From: DAM

        From: MINSKY

    	I tend to conjecture
    	that the alleged treasures in Peirce and Kant and others have the same
    	character.  As Feynman has said of pre-quantum mechanics, their work
    	was elegant and beautiful but the world view was inadequate.

    	Quantum mechanics could never have gotten off the ground
    without sophisticated pre-existing mathematical and physical ideas.  It
    is true that a lot of physical theories had to be thrown away but much
    of the pre-existing highly developed theory was absolutely essential
    and remained intact.  What should we throw out now in AI and what
    should we keep?

I'm afraid I have to agree with DAM.  What, in particular, do you find
inadequate about the world-views of Kant, Peirce, and the others?  Do
you find these inadequacies to be so pervasive that there is nothing
to be learned from these philosophers?  Doesn't the recognition of an
inadequacy point the way to better ideas?  Don't you yourself have any
intellectual debts?

∂28-Jan-83  0221	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  02:20:55 PST
Date: Friday, 28 January 1983  05:10-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 26 Jan 1983  14:52-EST from KDF

    From: KDF

    	I don't think whether or not we use, or to what extent we use,
    innate logical mechanisms has anything to do with the correspondence
    theory of truth.  

Isn't it possible that, if there are innate logical mechanisms (or
even alogical mechanisms), they could distort our perceptual equipment
so radically that nothing we think or say could ever (except by
chance) correspond to anything in "external reality" (assuming there
is such a thing)?

    The semantical story seems more or less independent
    of how our inferential mechanisms actually work, and is instead concerned
    with whether we think they give adequate/true/useful results or not.

Please explain what you mean here.  Don't you think that what we
denote to be an adequate, true, or useful result has something to do
with how our inferential mechanisms work?  Perhaps I'm
misunderstanding you.

∂28-Jan-83  0836	DAM @ MIT-MC 	Summary    
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  08:35:51 PST
Date: Friday, 28 January 1983  11:21-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Summary


	Date: Thursday, 27 January 1983  22:53-EST
	From: MINSKY

	I realize for the first time that you do not know much about my ideas.
	If you examine Learning Meaning, you will find that I have probably
	proposed the most elaborate innate mechanisms in the literature.  My
	position is that " ones proposed by you - such as sentences - are too
	simplistic since they don't explain much linguistic behavior.

	Well I must admit that I have not studied your work.  However
it would be useful if you would comment on the major portion of my summary
which I will restate (slightly rephrased) here:

	I believe that the notion of sentence, statement, and Tarskian
truth will be important to AI, not that they provide a theory of cognition
or language but that there is clear evidence for innate structures
which involve these notions (the existence of objective
judgements about language and mathematical truth).

	You have made it clear that you think these "old" notions are not
very important because they fail to provide plausible theories
of cognition and language.  You claim that you are making more progress
toward constructing a real artificial intelligence using new
revolutionary ideas which are independent of the notions
of sentence and truth.
	

Would you consider this a fair summary?  Do you really think that
your ideas provide a better explanation of linguistic behavior than those
that have been built on the notion of sentence?  I will look more
carefully at your ideas.

	David Mc

∂28-Jan-83  0845	DAM @ MIT-MC 	Sentences  
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  08:45:20 PST
Date: Friday, 28 January 1983  11:32-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Sentences


	Date: Thursday, 27 January 1983  22:44-EST
	From: MINSKY

	DAM: The universality (i.e. present in all cultures) of sentences is,
	     I think, good empirical evidence for innate cognitive mechanisms.


	Really, now.  I'm sure all cultures have words for MOTHER and CHILD.
	It is in the nature of things that all cultures have mothers and
	children.  Is that good "empirical evidence" that those words are
	innate?

	So you think that, independent of innate structures, it is in the
nature of cultures (networks of cognitive agents) that they use sentences.
This seems to be a stronger claim about the ultimate importance of sentences
than the claim that they reflect innate structures.
	It seems to me that the existence of the words "mother" and "child"
would tell a lot to an alien about our innate biology.

	David Mc

∂28-Jan-83  0901	GAVAN @ MIT-MC 	Sentences
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  09:01:31 PST
Date: Friday, 28 January 1983  11:48-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Sentences
In-reply-to: The message of 28 Jan 1983  11:21-EST from DAM

    From: DAM

    	I believe that the notion of sentence, statement, and Tarskian
    truth will be important to AI, not that they provide a theory of cognition
    or language but that there is clear evidence for innate structures
    which involve these notions (the existence of objective
    judgements about language and mathematical truth).

How can a judgment be objective?  Objects don't make judgments.  I
still don't know what you mean by "sentence" and "statement".  Just
what does Tarskian truth have to do with anything?  What is the "clear
evidence for innate structures which involve these notions"?

    	You have made it clear that you think these "old" notions are not
    very important because they fail to provide plausible theories
    of cognition and language.  You claim that you are making more progress
    toward constructing a real artificial intelligence using new
    revolutionary ideas which are independent of the notions
    of sentence and truth.

Well, we've already gone a fair bit on the meaning of "truth," but
just what constitutes a sentence?  Does it have to be well-formed,
whatever that means?  


Although I generally disagree with Marvin with regard to the value of
those stodgy old "pre-computational" philosophers (as if you'd have
had computers without them), I can agree with him when it comes to the
limits for formal logic.  How weak or strong is the mapping from
natural language to logic?  How can natural language be formalized
when we each derive different meanings from the same configurations of
word-tokens, and when word-tokens attach to themselves new meanings
every day?

∂28-Jan-83  0905	GAVAN @ MIT-MC 	Sentences
Received: from MIT-ML by SU-AI with NCP/FTP; 28 Jan 83  09:04:59 PST
Date: Friday, 28 January 1983  11:52-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Sentences
In-reply-to: The message of 28 Jan 1983  11:32-EST from DAM

    From: DAM

    It seems to me that the existence of the words "mother" and "child"
    would tell a lot to an alien about our innate biology.

Assuming the alien didn't notice the biology first and the speech
later.

∂28-Jan-83  1150	ISAACSON at USC-ISI 	Job Numbers   
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  11:50:38 PST
Date: 28 Jan 1983 1121-PST
Sender: ISAACSON at USC-ISI
Subject: Job Numbers
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]28-Jan-83 11:21:25.ISAACSON>


In-Reply-To: Your message of Friday, 28 Jan 1983, 04:58-EST


GAVAN: "What does Job have to say about how the mind work?"


Short of actually invoking the Bible [separation of DoD and
religion!], here is a little syllogistic job:


All Runnable Jobs can Interpret Numbers;

Job can Run;

The Job can Interpret Numbers.



∂28-Jan-83  1232	GAVAN @ MIT-MC 
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  12:30:14 PST
Date: Friday, 28 January 1983  05:48-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, PHIL-SCI @ MIT-MC
In-reply-to: The message of 26 Jan 1983  18:23-EST from MINSKY

    From: MINSKY

    ISAACSON:  In (Pierce's) view, when contrasted with "induction" and
         "deduction", it is the only truly creative mode of inference.  It
         is THE epistemogenic agent.  The sort that yields new explanatory
         hypotheses in scientific inquiry.  As a corollary he developed a
         theory of the "Economy of Research", an obscure and understudied,
         yet incredibly rich, research 


    Bravo to JDI's other remarks.  And perhaps Pierce discovered something
    incredibly rich - I haven't encountered it, but also haven't been
    convinced to invest in the search.

If you were trained as a scientist in the United States, then you HAVE
encountered Peirce's work, although he probably wasn't cited.  Judging
from what I've read of "Learning Meaning" so far, you have picked up a
lot from him, probably by osmosis.

    1.  The goal of finding one, or a very few forms of inference seems
    unrealistic to me.  

Whose goal is this?  Certainly not Peirce's.  

    To mathematically-oriented scientists, the virtue of
    compact formulations is to prove theorems about them.  This is chronically
    confused with the goal of using ideas or knowledge to get new ideas
    and knowledge.  For the latter, I suspect, we need a wide 

Well, I see where you're headed here, and I agree.  But I don't see what
it has to do with Kant, Peirce, and the others.

∂28-Jan-83  1253	GAVAN @ MIT-MC 	First Peirce - Then the Bible!    
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  12:52:51 PST
Date: Friday, 28 January 1983  04:53-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-MC
Subject: First Peirce - Then the Bible!
In-reply-to: The message of 27 Jan 1983  10:10-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI

    The game does not have to be played only between mind-machines
    and philosophy.  It can also be played between mind-machines and
    cognitive science and so on.  

Yeah, like mathematics, anthropology, sociology, etc., etc.  Academic
cubbyholes are just that -- cubbyholes.  It's all philosophy (love of
knowledge) anyway.  And all the disciplines emerged either from
philosophy or from disciplines that emerged from philosophy.

    In fact, PRAGMATICALLY, I'd
    emphatically use *any* foreign (i.e., external to mind-machines)
    context that can give me high metaphoric strength, at a given
    stage of development, to advance the progress of the center of
    our concern, i.e., the mind-machine.  And I really don't care
    what it is, and I don't have to BELIEVE in it per se, provided it
    does the job for me in the area of chief concern.  And yes, you
    guessed it, I'd use the Bible itself if I have to.

What does Job have to say about how the mind works?

Here is Marvin's argument taken to its logical conclusion:

Maybe the real problem is that we're discussing these questions in
English.  Maybe English embodies so many wrong assumptions about human
nature that we'll never be able to figure out how the mind works by
thinking, discussing, reading about it in English.  Let's all learn
Nootka or Hopi!

All seriousness aside, Marvin's point is well-taken.  Yet, at the same
time, I see no reason to throw the philosophical baby out with the
philosophical bathwater.  Anyone who uncritically accepts what some
philosopher says about mind might accept uncritically what anyone says
about it.  They'll lose, no matter how the variable bindings are
changed.

∂28-Jan-83  1427	MINSKY @ MIT-MC
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  14:27:23 PST
Date: Friday, 28 January 1983  17:14-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, PHIL-SCI @ MIT-MC
In-reply-to: The message of 28 Jan 1983  05:48-EST from GAVAN


GAVAN: If you were trained as a scientist in the United States, then
you HAVE encountered Peirce's work, although he probably wasn't cited.
Judging from what I've read of "Learning Meaning" so far, you have
picked up a lot from him, probably by osmosis.

OK, OK.  I don't have any profound point here.  I'm not even sure I have
a good point.  Of course we all have intellectual predecessors.  I do
not personally know much about the work of Cardan who first solved the
quartic equation, but there is a debt.  I tend to regard "predecessor"
in the sense of direct descent, which for me is McCulloch (Who talked
often of Pierce) Rashevsky, Shannon, and a few others, and
contemporaries like McCarthy, Selfridge, Newell, and Solomonoff.  What
I'm saying is that there may remain tidbits of promising material in
the older ones that has not yet been exploited - but that I myself
feel that people working on language and meaning would go further by
understanding better what has happened since.  

Maybe I'm just being egocentric, since I think my recent work on
meaning is better than previous work in both philosophy and
psychology.  But that must stand on its own, and perhaps I should
withdraw from the argument because of "conflict of interest".

∂28-Jan-83  1453	MINSKY @ MIT-MC 	Summary 
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  14:53:00 PST
Date: Friday, 28 January 1983  17:40-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Summary
In-reply-to: The message of 28 Jan 1983  11:21-EST from DAM


Your summary in [Date: Friday, 28 January 1983 11:21-EST] is right on
the mark.  In fact I feel that the work on formal theories of language
since Chomsky 1954 or so was wonderful logical and mathematical
work but, if anything, set back progress on understanding how
language is used, learned, what its structure is, and how it relates
to other mental activities.  The idea of "competence vs. performance"
was a setback to progress, because it drew attention away from
the procedures of parsing, representing knowledge, etc. etc.

If you follow the line of language studies in AI, our work in that
direction goes back to students here like Evans, Bobrow, Winograd, and
recent attempts to bridge, like Marcus and Berwick, showing that
the procedural "innate" requisites for grammatical behavior are not so
special to language as was supposed by the non-computational
linguists.  

It seems to me that there probably is innate machinery in the brain to
connect semantical entities to "word-like" compact utterances.  Nouns
correspond to one sort of thing, like frames or K-lines.  Also, things
like Verbs connect to other word-like things that correspond to
Differences and, hence, actions or their effects.  The most
pragmatically useful things to "say" are indicators - single-word
utterances - and indicators of CHANGE.  (I believe that brains do have
genetic provisions for dealing with Differences.) THEREFORE (and I use
the term with some deliberation) one would expect that when cultures
invent languages they will soon invent two-word and three word
structures involving the two kinds of indicators.  Thus things like
sentences will spontaneously emerge because of common-sense
utilities, and need no biological a priori explanations.

You might say that this is a consequence of sparseness in intelligence
space.  My objection to the philosophers centered around Chomskian
linguistics is that they don't seem to see this simple cluster of
ideas.  

What is important is that they don't see how "universals" can emerge
spontaneously yet almost inevitably from circumstantial needs, and conclude
that just because something is universal, then it needs some specific
genetic mechanism.

I should think you would be entirely sympathetic with this.  If one has
the axioms and reasoning principles of arithmetic, one needs no extra
axiom to explain why everyone knows about 17.  We don't have to say that 17
is innate in any special way.

All I am saying is that I see a fine, healthy explanation of why
everyone will almost certainly have to invent "sentences" a priori.
There might be other ways, but they are more complicated.  All
children appear first to invent one and two-word expressions - but one
can regard this as a consequence of complexity in general, not of any
special innate law.  Then, contrary to what you may have heard, children go
on to produce wide varieties of utterances, and sentences slowly emerge
among them over a few years that match the local culture's norms.  The
examples of "highly intricate" innate universal linguistic
restrictions that you hear so much about are actually all very
suspicious and questionable.   So far as I can see, there is
very little evidence for "universal grammar" at all, beyond
those nouns and verbs I explain above.

All languages also have things that correspond to adjectives,
and larger "phrase structures" that amount to recursive-like
amplifications of thing-to-be-described.  This, too, in my view, needs
no special explanation because I should think that the first
hypothesis one would make is that it involves some ways to re-use the
speech machinery already available.  What would have to be innate is
some machinery for "chunking" sub-structures, as in MArcus - or, in my
kind theory, of plugging other frames into slots of already active
frames.

So, you see, my complaint about all those "universal" theories and
"innate grammar" ideas s not that there is anything wrong with them
but that they are artifact - they are explanations much more complex
than necessary - which stem from the dogmatic error of thinking that
language is a substance in its own right, apart from other mental
mechanisms already there.  This makes them have to assume that
more is needed than necessary in a theory that lets the linguistic
processes ride on others.

In the end, of course, I am assuming MORE innate stuff.  I bet that
the brain has perhaps a million bits of innate structural stuff.  Only
I think the MIT linguistics movement went in a very wrong direction.

∂28-Jan-83  1551	KDF @ MIT-MC 	The Objectivity of Mathematics 
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  15:50:37 PST
Date: Friday, 28 January 1983  18:41-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 28 Jan 1983  05:10-EST from GAVAN

	From Gavan:
    Isn't it possible that, if there are innate logical mechanisms (or
    even alogical mechanisms), they could distort or perceptual equipment
    so radically that nothing we think or say could ever (except by
    chance) correspond to anything in "external reality" (assuming there
    is such a thing)?
    Please explain what you mean here.  Don't you think that what we
    denote to be an adequate, true, or useful result has something to do
    with how our inferential mechanisms work?  Perhaps I'm
    misunderstanding you.

It is useful to separate discovering how such mechanisms work from how
accurate or useful they are.  If we have a way of judging the results
then we can decide whether or not a particular inference mechanism is
what we want (for whatever reason).  But the theory of how it operates
(such as "we reason about X by doing the following steps:") doesn't
depend on the way that we judge whether or not what the mechanism does
is right, coherent, or whatever.  Whether or not we think that mechanism
is part of a mind will however depend on such judgements.

∂28-Jan-83  1603	DAM @ MIT-MC 	Sentences  
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  16:03:32 PST
Date: Friday, 28 January 1983  18:55-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ, BERWICK @ MIT-OZ
Subject: Sentences


	First let me try to summarize your position as I understand
it (feel free to make corrections).  You are saying that you think
there is lots of innate machinery (a million or so bits) and that
language, meaning, and thought in general can be understood as the
interaction of this machinery with the environment and the needs of an
organism (or cognitive system).  You further feel that linguistic
"universals" as they have been described by linguists are derived
artifacts of this system rather than a direct reflection of an innate
structure.  Thus you feel that the study of "language" (i.e. the
non-computational linguistic definition of language) is misleading and
has hurt AI.

	Well I am sympathetic with the view that, while there is lots
of innate structure, language is not a DIRECT reflection of this
structure.  However I strongly disagree with your assessment of the
importance of Chomskian linguistics (and analogously with your
presumed assessment of the importance of the study of mathematics, i.e.
with logic).  Bob Berwick's thesis, which you mention, provides a
precise and from what I am told highly plausible theory of the innate
machinery which is involved in learning syntax.  But Chomsky was
Berwick's thesis supervisor and it is my impression that Berwick's
work is based on Chomskian universals.  Furthermore as far as I know
Berwick's theory does not relate directly to general cognition.
Berwick's statements about general cognition are in fact very similar
to what Chomsky has been saying all along "that to learn x one must
a-priori constrain the form of x".  This approach argues for a highly
constrained a-priori representation language.
	Modern approaches to logic (formal languages) have proceeded
along lines similar to those of modern linguistics: the emphasis has
been on validity or truth as well as on computational structures or
deduction relations.  Mathematics seems to me to be completely
objective and I suspect that mathematical truth IS more or less a
direct reflection of an innate structure.  In any case I think the
study of mathematical truth must precede the study of how one computes
it, just as the study of language preceded a good theory of
computational language acquisition.  You may say that one need not be
concerned with computing mathematical truth but there are certain
domains, such as computer programming, where it seems that determining
the truth of mathematical statements is done all the time, even by
those with no mathematical training.

	I would really like to hear what Berwick has to say about all
this.

	David Mc

∂28-Jan-83  1614	DAM @ MIT-MC 	meaning    
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  16:14:09 PST
Date: Friday, 28 January 1983  19:04-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Isaacson @ USC-ISI
cc:   phil-sci @ MIT-OZ
Subject: meaning


	Date: Thursday, 27 January 1983  23:16-EST
	From: ISAACSON at USC-ISI

	Now that you start talking about MEANING, and even TWO meanings,
	and TWO different PROOFS, etc.  you getting mighty close to
	Peirce's position!  In fact, it appears (to me, at least) that
	you're starting to discover the essence of pragmatics, as
	distinct from semantics, in the Peircean sense!  Congratulations!
	and Welcome to the club!

In talking about meaning I am staying STRICTLY within the framework
of Tarskian semantics.

	David Mc

∂28-Jan-83  1634	ISAACSON at USC-ISI 	Welcome to the club (?) 
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  16:34:20 PST
Date: 27 Jan 1983 2016-PST
Sender: ISAACSON at USC-ISI
Subject: Welcome to the club (?)
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]27-Jan-83 20:16:29.ISAACSON>



In-Reply-To: Your message of Thursday, 27 Jan 1983, 17:54-EST


JDI: The question, though, is would someone like DAM bother to
worry about constructive methods to establish such premises in
the first place.


DAM: The answer to this question depends on what the statement
"all foos are gritchy" is taken to MEAN [my emphasis, JDI] (in a
precise way)...

...Thus the same sentence can mean two different things and a proof of one meaning is not necessarily a proof of the other meaning...


It is not clear if you're answering for yourself or for the
intuitionist position whose details you profess to not
understand.  At any rate I'll take it that you speak for
yourself.

Now that you start talking about MEANING, and even TWO meanings,
and TWO different PROOFS, etc.  you're getting mighty close to
Peirce's position!  In fact, it appears (to me, at least) that
you're starting to discover the essence of pragmatics, as
distinct from semantics, in the Peircean sense!  Congratulations!
and Welcome to the club!


∂28-Jan-83  1920	MINSKY @ MIT-MC 	Sentences    
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  19:19:59 PST
Date: Friday, 28 January 1983  17:51-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Sentences
In-reply-to: The message of 28 Jan 1983  11:32-EST from DAM



DAM:	So you think that, independent of innate structures, it is in the
nature of cultures (networks of cognitive agents) that they use sentences.
This seems to be a stronger claim about the ultimate importance of sentences
than the claim that they reflect innate structures.

I do claim this, as explained in a previous message.  I consider it a more
plausible, reasonable, and clever explanation than waving hands to say that
it is "just innate".  Besides, it fits the evidence.

May I add that I have a feeling that you don't appreciate what I'll
call "the problem of modularity".  ( I may be wrong about that.)  The
problem is characteristic of people who think in terms of
axiomatization: You see it as relatively simple to add a new
assumption because you can isolate it as a new proposition.  But if
you think about how the mind might work - that is in terms of
performance rather than competence, you might find that it takes a lot
of machinery to implement an "innate" concept.  One needs the skills
to use it.  This is not apparent if one assumes that everything will
follow from some single, simple rule of inference.

The application of this is that when I suggest how grammar emerges
spontaneously from some cultural or cognitive phenomenon, then I'm not
"claiming" anything new, except that my reasoning is plausible.
But when you add "sentences are innate" you are adding more to the theory.
So I wonder what you mean by "my claim being stronger"?

∂28-Jan-83  1927	ISAACSON at USC-ISI 	Re:  meaning  
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  19:27:28 PST
Date: 28 Jan 1983 1913-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meaning
From: ISAACSON at USC-ISI
To: DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]28-Jan-83 19:13:58.ISAACSON>


In-Reply-To: Your message of Friday, 28 Jan 1983, 19:04-EST


DAM: In talking about meaning I am staying STRICTLY within the
framework of Tarskian semantics.


I respect your inherent right to self-determination.

Perhaps we better move on to some new topics.


-- JDI


∂28-Jan-83  2022	phil-sci-request at MIT-MC 	Archives On MIT-AI    
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  20:22:41 PST
Date: Friday, 28 January 1983, 20:24-EST
From: phil-sci-request at MIT-MC
Sender: JBA at MIT-OZ at MIT-MC
Subject: Archives On MIT-AI
To: phil-sci-net at MIT-OZ at MIT-MC

Due to popular demand, copies of the archives now exist on MIT-AI for
FTPing, reading, etc.

The archive is in: AI:COMMON;PHIL-S DIS-1
The inbox is in:   AI:COMMON;PHIL-S INBOX

Note that these files are in RMAIL format.
Have fun!

∂28-Jan-83  2340	John McCarthy <JMC@SU-AI> 	sentences    
Received: from MIT-MC by SU-AI with NCP/FTP; 28 Jan 83  23:40:21 PST
Date: 28 Jan 83  2333 PST
From: John McCarthy <JMC@SU-AI>
Subject: sentences
To:   phil-sci@MIT-OZ  

My grade school teacher told me that the virtue of sentences was that
a sentence expresses a complete thought.
A philosopher could undoubtedly have made her feel
foolish by showing that she didn't have a clear idea of what constitutes
a complete thought.  Nevertheless, the grade school teacher was right.
Complete sentences are less context dependent
than incomplete sentences.  Speaking or writing complete sentences
reduces the probability of presuming more context than the listener
or reader actually has.
This is why careful speech and writing in all languages involves
complete sentences.

	I believe that children could, in an unethical
experiment, be brought up in a language that did not involve complete
sentences.  If they were then left alone, one of them might invent
the concept of complete sentences and persuade his fellows of their
advantages in avoiding misunderstanding.

	It seems to me that archeology tells us that the first written
languages did not have complete sentences and were therefore useful
only for limited communication - the dates of kings and battles and
the contents of warehouses.


∂29-Jan-83  0009	GAVAN @ MIT-MC 	meaning  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  00:09:06 PST
Date: Saturday, 29 January 1983  03:02-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Isaacson @ USC-ISI, phil-sci @ MIT-OZ
Subject: meaning
In-reply-to: The message of 28 Jan 1983  19:04-EST from DAM

    From: DAM

    In talking about meaning I am staying STRICTLY within the framework
    of Tarskian semantics.

Then are you really talking about meaning at all?  Perhaps I'm
misunderstanding Tarskian semantics.  Can you state in a nutshell, for
the non-cognoscenti, Tarski's main thesis?

∂29-Jan-83  0023	←Bob <Carter at RUTGERS> 	sentences
Received: from MIT-ML by SU-AI with NCP/FTP; 29 Jan 83  00:23:43 PST
Date: 29 January 1983  03:12-EST (Saturday)
Sender: CARTER at RU-GREEN
From: ←Bob <Carter at RUTGERS>
To:   John McCarthy <JMC at SU-AI>
Cc:   phil-sci at MIT-OZ
Subject: sentences

    Date: 28 Jan 83  2333 PST
    From: John McCarthy <JMC@SU-AI>
    Re:   sentences

    	It seems to me that archeology tells us that the first written
    languages did not have complete sentences and were therefore useful
    only for limited communication - the dates of kings and battles and
    the contents of warehouses.

				Aaaargh!  

				What.
				Will.
				Future.
				Archeologists.
				Say about English.
				When 
				They.
				Read.
				Your.
				Grocery.
				List?

←Bob

∂29-Jan-83  0151	JCMa@MIT-OZ 	meta-epistemology, philosophy of science, innateness, and learning 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  01:50:54 PST
Date: Saturday, 29 January 1983, 04:20-EST
From: JCMa@MIT-OZ
Subject: meta-epistemology, philosophy of science, innateness, and learning
To: phil-sci@mc
In-reply-to: The message of 26 Jan 83 14:56-EST from GAVAN at MIT-MC

The goal of the philosophy of science is to explicate the processes
whereby people, as scientists, can acquire new knowledge of their world.
When scientific inquiry is undertaken, the new knowledge is at first
unknown to the scientist.  Thus, the problem amounts to taking some
initial knowledge about the world, generating some likely hypotheses,
and convincingly assessing the hypotheses.  The generation of hypotheses
is an inherently creative act.  Testing of hypotheses may or may not
require generation of new hypotheses.  For simplicity we can assume that
hypothesis testing is a purely deductive process.  That leaves
hypothesis formation as the primary creative component in acquisition of
new knowledge.

The view of simple cybernetics was as follows:  Given some knowledge,
recombine it into some new syntheses, and selectively retain it.
Frequently this recombination was thought to be "blind."  However, it
seems clear that the universe of possible hypotheses is too large for
the blind approach to succeed.  What is needed is a method for
best-first generation of possible hypotheses: constraint.  Epistemogenic
processes (e.g., abduction, metaphor, and analogy) must be structured in
a way which satisfies this best-first [or best-almost-first] constraint.
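
As a purely procedural gloss (an illustrative sketch, not JCMa's; the
scoring function and toy hypothesis space are invented), "best-first
generation of possible hypotheses" can be read as expanding candidates
from a priority queue ordered by a plausibility score rather than
enumerating them blindly:

# Illustrative sketch: best-first hypothesis generation with a priority queue.
import heapq

def best_first_hypotheses(seed, refine, score, limit=10):
    """Expand and yield hypotheses best-first by score; refine() proposes variants."""
    frontier = [(-score(seed), seed)]
    seen = {seed}
    while frontier and limit > 0:
        neg_score, hyp = heapq.heappop(frontier)
        yield hyp, -neg_score
        limit -= 1
        for new_hyp in refine(hyp):
            if new_hyp not in seen:
                seen.add(new_hyp)
                heapq.heappush(frontier, (-score(new_hyp), new_hyp))

# Toy example: hypotheses are integer thresholds scored by fit to observations.
observations = [2, 3, 5]
score = lambda t: -sum(abs(t - o) for o in observations)    # higher is better
refine = lambda t: [t - 1, t + 1]
for hyp, s in best_first_hypotheses(4, refine, score, limit=5):
    print(hyp, s)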

The Kuhnian view in the philosophy of science sees anomaly as the
driving force in scientific revolutions.  Moreover, Kuhn would argue
that "normal science" tends to be "puzzle-solving" within dominant
paradigms, or mainly a deductive enterprise.  In my view, one can
summarize the task of philosophy of science as finding out how
epistemogenic processes work in scientific revolutions.

Such an inquiry is really an inquiry in meta-epistemology because it
seeks to explain how it can be that an epistemological process, science,
can successfully inquire into "truth."  In other words, the philosophy
of science seeks to elucidate the processes and conditions that make
epistemic activity both possible and effective.  Thus, a good
meta-epistemology must propose the set of necessary processes and
conditions for "truth" to be uncovered.

If I were to argue about what is innate in human cognition, I would take
the position that it is not some particular language structures, or even
some epistemological processes.  Rather, I would argue that it is these
meta-epistemological structures which are innate.  Of course, when one
contemplates the possibility that the meta-epistemology may be able to
change itself, one must claim that it is really a meta-epistemology
which can recursively define itself, a third-order epistemology.  

If we take learning, be it collective or individual, to be the
acquisition of new knowledge through selective hypothesis formation and
selective retention, it becomes clear that learning is intimately
related to third order epistemology.  Given that both individuals and
groups evince epistemological behavior, it would seem reasonable that
psychology and the philosophy of science should both be able to inform each
other regarding these processes.  My guess is that the systemic
structures at both levels, the collective and the individual, bear
strong family resemblances to each other.  Comments?

∂29-Jan-83  0806	DAM @ MIT-MC 	Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  08:05:59 PST
Date: Saturday, 29 January 1983  11:01-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Tarskian Semantics


	I would like to give a highly personalized brief account of
Tarskian semantics.  Most (probably all) modern mathematics is done in
an informal but precise way.  Mathematicians give ENGLISH definitions
of structures such as the integers and prove ENGLISH statements about
them.  These statements have precise but informal "meaning".  I do not
claim to know the precise nature of that meaning since I do not claim
to have precise and accurate metamathematics.
	A formal language is a set of well formed formulas (character
strings).  Such strings should be taken as just that, meaningless
character strings, until a PRECISE BUT INFORMAL meaning has been
assigned to them.  Tarskian semantics provides a way of translating
these character strings into precise but informal english mathematical
statements.  The details of Tarskian semantics obscure this way of
looking at it.  In giving the details of a Tarskian semantics one
well formed formula and gives either "true" or "false". Any well
formed formula can then be translated to an english statement about a
"model".
	If one believes in an innate language of mathematics then
Tarskian semantics provides a specific translation from a formal
external language to the mysterious and unknown innate mathematical
language of mind.

	Tarskian semantics is also the proper conceptual foundation
for the correspondence theory of truth.  One interpretation of the
correspondence theory is that statements are always "about
something".  In the case of Tarskian semantics formal statements are
"about" the precisely defined "models".  The models can be any
precisely definable mathematical structure.  There certainly need be
no one to one correspondence between well formed formulas and objects
in the model.  Also it is well known that for most formal languages
there are indistinguishable models, two models such that every formal
sentence has the same truth value on both.  Thus while one can think
of sentences as being about something one can not determine through
the truths of sentences exactly what that thing is.  These latter
properties of Tarskian semantics are solid mathematical results and
while many people do not understand them they must taken as
non-controversial.

	One last comment before this message gets too long.  If one
thinks of Tarskian semantics as providing a translation between a
formal language and a mysterious language of thought then one can ask
whether this translation is onto, i.e. does every precise statement a
mathematician can make informally have a corresponding sentence in the
formal language.  It is absolutely clear that the first order
predicate calculus does not have an onto translation: there are
naturally occurring mathematical statements which do not correspond to
any set (including infinite sets) of first order sentences.

	David Mc

∂29-Jan-83  0809	GAVAN @ MIT-MC 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  08:09:20 PST
Date: Saturday, 29 January 1983  11:00-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, PHIL-SCI @ MIT-MC
In-reply-to: The message of 28 Jan 1983  17:14-EST from MINSKY

    From: MINSKY

    What I'm saying is that there may remain tidbits of promising
    material in the older ones that has not yet been exploited - but that
    I myself feel that people working on language and meaning would go
    further by understanding better what has happened since.

TIDBITS?!?!  Is that all?  Aristotle, Spinoza, DesCartes, Kant, Hegel,
Locke, Hume, Berkeley, Peirce, Husserl, Heidegger, etc. have given us
only TIDBITS in comparison to wonders like Solomonoff, Minsky, and
McCarthy?  Come on.

    Maybe I'm just being egocentric, since I think my recent work on
    meaning is better than previous work in both philosophy and
    psychology.  But that must stand on its own, and perhaps I should
    withdraw from the argument because of "conflict of interest".

Well, maybe you ARE being egocentric, but it probably takes a large
ego to make any progress whatever in this field.  An abundance of
self-confidence is required to even attempt the subject.  Anyway, from
what I know about the philosophers I listed above, they were all
egomaniacs of one variety or another.

I hope you don't withdraw from the discussion, because there is much
in "Learning Meaning" that is provocative and it could serve as a
focal point for a discussion that gets beyond semantic confusion.  I
would agree that it's better than much (do you really think all?)
previous work in both philosophy and psychology (especially
psychology), but I'm not sure to whom precisely you're comparing
yourself.  Perhaps a comparison of the ideas in "Learning Meaning"
with those of Peirce would be appropriate.  Maybe JDI could summarize
what Peirce means by firstness, secondness, and thirdness, and then
Marvin can explain how he treats these phenomena (noumena?).

∂29-Jan-83  0835	GAVAN @ MIT-MC 	The Objectivity of Mathematics    
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  08:35:39 PST
Date: Saturday, 29 January 1983  11:23-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 28 Jan 1983  18:41-EST from KDF

    From: KDF

    It is useful to separate discovering how such mechanisms work from how
    accurate or useful they are.  If we have a way of judging the results
    then we can decide whether or not a particular inference mechanism is
    what we want (for whatever reason).  But the theory of how it operates
    (such as "we reason about X by doing the following steps:") doesn't
    depend on the way that we judge whether or not what the mechanism does
    is right, coherent, or whatever.  Whether or not we think that mechanism
    is part of a mind will however depend on such judgements.

I suppose that at least a major part of theorizing is an act of
judgment.  It seems to me that a theory can be characterized (in some
sense) as a trace of judgments.  What happens when we try to develop a
theory of (make a judgment about) how we theorize or make judgments?
Where do we stand when we make this judgment?  

Why do you say that the theory of how it [an innate logical mechanism]
operates doesn't depend on the way that we judge?  Don't all our
theories depend (to some extent at least) on the way that we judge?
How can we even answer this question without a theory of how to judge?
Hmmmmm.....

∂29-Jan-83  0845	GAVAN @ MIT-MC 	meaning  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  08:45:27 PST
Date: Saturday, 29 January 1983  11:41-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Isaacson @ USC-ISI, phil-sci @ MIT-OZ
Subject: meaning
In-reply-to: The message of 28 Jan 1983  19:04-EST from DAM

    From: DAM

    In talking about meaning I am staying STRICTLY within the framework
    of Tarskian semantics.

OK.  I just checked it out with BAK and he tells me that, indeed, Tarskian
semantics refers to the correspondence-theory-of-truth-related theory that
holds that: "Foo is a bar" is true iff foo is a bar.

What does this have to do with meaning?  Or is this just the framework
within which you want to constrain any discussion of semantic issues?
If the latter, what does your construal of meaning have to do with
language?

∂29-Jan-83  0935	MINSKY @ MIT-MC
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  09:35:07 PST
Date: Saturday, 29 January 1983  12:26-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, PHIL-SCI @ MIT-MC
In-reply-to: The message of 29 Jan 1983  11:00-EST from GAVAN


TIDBITS?!?!  Is that all?  Aristotle, Spinoza, DesCartes, Kant, Hegel,
Locke, Hume, Berkeley, Peirce, Husserl, Heidegger, etc. have given us
only TIDBITS in comparison to wonders like Solomonoff, Minsky, and
McCarthy?  Come on.

    The question is in the word "remain".  I'm not saying they didn't
have a big influence, for better or for worse.  The burden is on you to
produce an extract of the tidbits or better, if you prefer to work that way.

∂29-Jan-83  1053	MINSKY @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  10:53:32 PST
Date: Saturday, 29 January 1983  12:37-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ mc
Subject: meta-epistemology, philosophy of science, innateness, and learning
In-reply-to: The message of 29 Jan 1983 04:20-EST from JCMa


JCMA: If I were to argue about what is innate in human cognition, I
would take the position that it is not some particular language
structures, or even some epistemological processes.  Rather, I would
argue that it is these meta-epistemological structures which are
innate.

Well, I'm glad about the subjunctive.  What I'm having difficulty with
is what you guys mean by "innate".  Because it seems to pretend there
is no child development.

How about a round of what you mean by that?  If you start with one
mechanism, e.g., words, and then (as JMc points out) it is more or
less almost certain to lead (by hill-climbing, say) to sentences in
order to express certain thoughts, is that sentential
mechanism to be innate?  If so, then, what value has the word innate
as distinguished from other things that happen to processes?

∂29-Jan-83  1139	GAVAN @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  11:39:35 PST
Date: Saturday, 29 January 1983  14:03-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ mc
Subject: meta-epistemology, philosophy of science, innateness, and learning
In-reply-to: The message of 29 Jan 1983  12:37-EST from MINSKY

    From: MINSKY

    What I'm having difficulty with is what you guys mean by "innate".
    Because it seems to pretend there is no child development.

    How about a round of what you mean by that?  

Sometimes "innate" is used to denote peculiarities about individuals.
My eyes are hazel.  This is "innate" in a sense, since the hazelness
of my eyes is part of my code.  By "innate" I don't mean anything at
all like this.

When I use the term "innate" I refer to those concepts and abilities
that must pre-exist in the child at birth if it is to develop.  Innate
mental abilities, then, would be the necessary conditions of
knowledge.  I agree with Kant on this one.  In order to have any
knowledge of anything in the world, I must first possess the ability
to detect differences in space and time.  So the concepts of space and
time are pure and a priori.  Without them empirical learning would not
be possible.  Neither would mathematics.

Sentences certainly are not innate, although something remotely
analogous to sentences might be. 

∂29-Jan-83  1156	ISAACSON at USC-ISI 	The Meta-Epistemogen:  Difference Detection 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  11:56:47 PST
Date: 29 Jan 1983 1144-PST
Sender: ISAACSON at USC-ISI
Subject: The Meta-Epistemogen:  Difference Detection
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: MINSKY at MIT-MC, JCMa at MIT-MC, phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]29-Jan-83 11:44:32.ISAACSON>


In-Reply-To: Your message of Saturday, 29 Jan 1983, 14:03-EST


GAVAN: I agree with Kant on this one.  In order to have any
knowledge of anything in the world, I must first possess the
ability to detect differences in space and time.


JDI: DETECT DIFFERENCES IN SPACE AND TIME (period)


In my opinion, the whole story does begin that way, with or
without Kant.

Reference: Genesis, Chap.  1, Verse 1. [I told you we'll get
there]


∂29-Jan-83  1212	DAM @ MIT-MC 	Sentences  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:12:41 PST
Date: Saturday, 29 January 1983  15:07-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Sentences


	Date: Saturday, 29 January 1983  12:37-EST
	From: MINSKY

	(paraphrased)

	If you start with words then (as JMC points out) they more or
	less almost certainly lead (by hill-climbing, say) to sentences.

	I do not think McCarthy said any such thing.  The notion that
words lead inevitably to sentences seems to me to be pure conjecture
on your part.  Behaviorists thought language was "in the cards" in
behaviorism.  Do you have any argument for the inevitability of sentences
which is stronger than a hand waving behaviorist style "in the
cards" argument?  (Do you have a computer simulation which generates syntax
or a definition of a precise computational mechanism and a concrete
argument for why it would generate syntax?)
	McCarthy pointed out that a sentence is defined by school teachers
as "a complete thought".  To me this indicates a close relationship between
syntax and semantics, e.g. that a sentence is a string of words with an
algorithmically derivable meaning.  Of course human ingenuity can be used
to get meanings from incomplete sentences.

	David Mc

∂29-Jan-83  1216	DAM @ MIT-MC 	Definitions of "innate"   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:16:06 PST
Date: Saturday, 29 January 1983  14:36-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Definitions of "innate"


	Date: Saturday, 29 January 1983  12:37-EST
	From: MINSKY

	What I'm having difficulty with is what you guys mean by
	"innate".  Because it seems to pretend there is no child development.

	How about a round of what you mean by (innate)?  If you start with one
	mechanism, e.g., words, and then (as JMc points out) it is more or
	less almost certain to lead (by hill-climbing, say) to sentences in
	order to express certain thoughts, is that sentential
	mechanism to be innate?  If so, then, what value has the word innate
	as distinguished from other things that happen to processes?

	I call an aspect of behavior innate in an organism if that
aspect is exhibited by that organism independent of the developmental
environment.  Of course I am only interested in plausible
developmental environments, ones in which there is light to see and a
long established community of people to interact with.  However there
is still lots of environmental variance.  English is not innate.  The
theory of general relativity is not innate.  The knowledge that water
can freeze is not innate.
	This notion of innateness BY NO MEANS pretends there is no
development (are you serious?).  While this definition of innateness
does not imply that innate behaviour corresponds to specific sequences
of DNA it seems plausible to me that DNA specific to a behavior
would come into existence once the behavior became universal.  Even if
innate behaviour is derived from DNA in a complex developmental way
IDENTIFYING innate behavior is (I think) important for making theories
of what the DNA specifies directly.

	David Mc

∂29-Jan-83  1222	DAM @ MIT-MC 	Sentences  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:18:50 PST
Date: Saturday, 29 January 1983  15:12-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Sentences


	I just figured out why you interpreted McCarthy as saying that
words lead to sentences.  He said that the first written languages had
no sentences.  This of course does not imply that the spoken languages
of those times did not have sentences.  There are lots of current human
non-written languages and they all use sentences.  Early writing
undoubtedly involved much smaller written than spoken vocabularies
(there may not have been any way of writing verbs).

	David Mc

∂29-Jan-83  1225	JCMa@MIT-OZ at MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:25:06 PST
Date: Saturday, 29 January 1983, 15:18-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: meta-epistemology, philosophy of science, innateness, and learning
To: MINSKY@MIT-MC
Cc: sitting-ducks@MIT-OZ at MIT-MC
In-reply-to: The message of 29 Jan 83 12:37-EST from MINSKY at MIT-MC

    from: Minsky
    In-reply-to: The message of 29 Jan 1983 04:20-EST from JCMa

    JCMA: If I were to argue about what is innate in human cognition, I
    would take the position that it is not some particular language
    structures, or even some epistemological processes.  Rather, I would
    argue that it is these meta-epistemological structures which are
    innate.

    Well, I'm glad about the subjunctive.  What I'm having difficulty with
    is what you guys mean by "innate".  Because it seems to pretend there
    is no child development.  

No, there certainly is child development, and it most certainly has much
to say about what is and is not innate.  The Chomsky/Piaget debates in
the mid-1970's tend to point this up, loudly.  When we speak of higher
order epistemological processes, what we mean is actually very
fundamental: Those processes which I, or gavan, would refer to as being
innate are ones which must be presupposed by developmental psychology,
e.g. basic difference detection, basic space-time perception.  How much
development is a child going to do without these basic ingredients?

    If you start with one mechanism, e.g., words, and then (as JMc points
    out) it is more or less almost certain to lead (by hill-climbing,
    say) to sentences in order to express certain thoughts, is that
    sentential mechanism to be innate?

What about putting together the morphemes ... into words before we rush
off to sentences?  How did we get to words so quickly?  If one grovels
down at the lower levels, there can be no question that certain
perceptual capacities [e.g. the senses] must be grounded in "close-to"
algorithmic hardware.  [You just can't afford to spend too much time
worrying about the composition of particular photons when surveying a
visual scene.]  Beyond this sort of low-level constraint, learning is
required.  But it's not just any sort of learning; it's learning about
learning [deutero-learning, or self-reflective learning].  That is the
point of the meta-epistemology issue:  identifying the simplest core of
such a system, which is still capable of unfolding into successively
more powerful versions with every cycle [and which is nevertheless
complete].

    If so, then, what value has the word innate as distinguished from
    other things that happen to processes?

You are correct to indicate that innateness is meaningless when one thinks
in terms of processes, as there is really no detectable beginning.  The
difficulty lies in that there is then no shut-off, and evolution becomes
the entire explanation.  My use of the term is simply conventional,
indicating, essentially, whatever comes before my analysis begins.
Self-organization of molecules and genetics are not my strong points --
but I wouldn't mind hearing more about them.

Anyway the main point of my previous message was to call more attention to
the hypothesis formation question, which seems rather underdeveloped
in the conventional wisdom.

∂29-Jan-83  1232	ISAACSON at USC-ISI 	What is a chair?   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:32:46 PST
Date: 29 Jan 1983 1219-PST
Sender: ISAACSON at USC-ISI
Subject: What is a chair?
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC, MINSKY at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]29-Jan-83 12:19:49.ISAACSON>


In-Reply-To: Gavan's message of Saturday, 29 Jan 1983, 11:00-EST
and Minsky's message of same date, 12:26-EST


GAVAN: Perhaps a comparison of the ideas in "Learning Meaning"
with those of Peirce would be appropriate.  Maybe JDI could
summarize what Peirce means by firstness, secondness, and
thirdness, and then Marvin can explain how he treats these
phenomena.


MINSKY: The burden is on you to produce extract [of] the tidbits,
or better, if you prefer, to work that way.



It is not clear to me whether Minsky really wishes to dilute his
hard-earned conceptions by endless comparisons with Peirce.  And
I can't really blame him: it may, indeed, become diversionary.

So I will not go into firstness, secondness, and thirdness, the
triadic constituents of the phaneron [which, I do think, may very
well have their parallels in "Learning Meaning"].  But let's try
something simple.


In "Learning Meaning", Chap.  4 on "Meaning", after a reasoned
discussion of the meaning of an object such as "chair", Minsky
says:


MINSKY: Reflection shows that "real things" are only rarely
things.  At first one might suppose that "chair" is just "a thing
with seat and legs and back".  But once one tries to frame a
definition that works for all various chairs we recognize,
there's little left in common except "something one can sit
upon".  In the end we find that "a chair" is nearly just as
mentalistic as "a wish"; again, we only find the unity we seek in
*purpose* or *intended use*; [and he concludes]:


                  a chair is, in its ESSENCE, what we USE it for.
[emphasis mine, JDI]


Now take one of Peirce's formulations of his so-called "Pragmatic
Maxim".

PEIRCE: Consider what effects, that might conceivably have
practical bearings, we conceive the object of our conception to
have.  Then, our conception of these effects is the whole of our
conception of the object.


To my mind, these points of view are awfully close.  I wonder if
Minsky recognizes that.  If not, can you point out the substance
of the difference, if any?

∂29-Jan-83  1243	GAVAN @ MIT-MC 	CREATION, AUTOPOEISIS [smash epistemogens]:  Difference Detection    
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:43:25 PST
Date: Saturday, 29 January 1983  15:35-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   JCMa @ MIT-OZ, MINSKY @ MIT-OZ, full-sigh @ MIT-OZ
Subject: CREATION, AUTOPOEISIS [smash epistemogens]:  Difference Detection
In-reply-to: The message of 29 Jan 1983  14:44-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI

    GAVAN: I agree with Kant on this one.  In order to have any
    knowledge of anything in the world, I must first possess the
    ability to detect differences in space and time.

    JDI: DETECT DIFFERENCES IN SPACE AND TIME (period)

    In my opinion, the whole story does begin that way, with or
    without Kant.

    Reference: Genesis, Chap.  1, Verse 1. [I told you we'll get
    there]

"LET THERE BE ELECTROMAGNETIC RADIATION!"  [take that, Tarski!]

∂29-Jan-83  1249	JCMa@MIT-OZ at MIT-MC 	POESIS: The Meta-Epistemogen:  Difference Detection 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  12:49:34 PST
Date: Saturday, 29 January 1983, 15:44-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: POESIS: The Meta-Epistemogen:  Difference Detection
To: ISAACSON@USC-ISI
Cc: phil-sci@MIT-OZ at MIT-MC
In-reply-to: <[USC-ISI]29-Jan-83 11:44:32.ISAACSON>

    From: ISAACSON at USC-ISI
    Message-ID: <[USC-ISI]29-Jan-83 11:44:32.ISAACSON>
    In-Reply-To: Your message of Saturday, 29 Jan 1983, 14:03-EST

    GAVAN: I agree with Kant on this one.  In order to have any
    knowledge of anything in the world, I must first possess the
    ability to detect differences in space and time.

    JDI: DETECT DIFFERENCES IN SPACE AND TIME (period)

    In my opinion, the whole story does begin that way, with or
    without Kant.

    Reference: Genesis, Chap.  1, Verse 1. [I told you we'll get
    there]
Yup,

But I am getting tired of epistemogen; why don't you use the term
poetic (as in auto- or allo-poetic)?  It means creative, already has
some currency, and conveys the idea of generation of new categories.
Furthermore, it jibes with some of the ideas attributed to Winograd in
the Le Monde article!  Comments?


p.s.  One minor detail:  What fields do you want to detect differences
across in mental space?

∂29-Jan-83  1330	ISAACSON at USC-ISI 	Epistemogen ===>   Poesis    
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  13:30:48 PST
Date: 29 Jan 1983 1310-PST
Sender: ISAACSON at USC-ISI
Subject: Epistemogen ===>   Poesis
From: ISAACSON at USC-ISI
To: JCMa at MIT-MC, GAVAN at MIT-MC
Cc: sitting-ducks at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]29-Jan-83 13:10:03.ISAACSON>
Redistributed-To: phil-sci at MIT-MC
Redistributed-By: ISAACSON at USC-ISI
Redistributed-Date: 29 Jan 1983


A compound word that includes POESIS sounds promising to me.  But
if we do that, I'll have to send a message containing about two
pages from Walt Whitman's "Democratic Vistas".  Should I do that?


∂29-Jan-83  1342	GAVAN @ MIT-MC 	The Meaninglessness of Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  13:42:31 PST
Date: Saturday, 29 January 1983  16:30-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: The Meaninglessness of Tarskian Semantics
In-reply-to: The message of 29 Jan 1983  11:01-EST from DAM

    From: DAM

    	Tarskian semantics is also the proper conceptual foundation
    for the correspondence theory of truth.  

Then I guess we don't need it.

    One interpretation of the
    correspondence theory is that statements are always "about
    something".  In the case of Tarskian semantics formal statements are
    "about" the precisely defined "models".  

Of course, models don't model anything but other models.  There are
only models.

    The models can be any
    precisely definable mathematical structure.  There certainly need be
    no one to one correspondence between well formed formulas and objects
    in the model.  Also it is well known that for most formal languages
    there are indistinguishable models, two models such that every formal
    sentence has the same truth value on both.  Thus while one can think
    of sentences as being about something, one cannot determine through
    the truths of sentences exactly what that thing is.  These latter
    properties of Tarskian semantics are solid mathematical results and
    while many people do not understand them, they must be taken as
    non-controversial.

They PERHAPS must be taken as non-controversial within the limited
range of mathematics.  Outside mathematics Tarskian semantics seems
devoid of meaning.  Since you claim it conceptually founds a theory of
truth, the correspondence theory, you are seeking to apply this theory
outside mathematics.  I don't think you can.  And I don't think you
can set yourself up as the arbiter of what's controversial and what's
non-controversial outside mathematics on the ground of your expertise
within mathematics.  In fact, I see this last view as quite dangerous.


∂29-Jan-83  1411	GAVAN @ MIT-MC 	What is a chair?   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  14:11:14 PST
Date: Saturday, 29 January 1983  16:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-MC
Subject: What is a chair?
In-reply-to: The message of 29 Jan 1983  15:19-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI

    In "Learning Meaning", Chap.  4 on "Meaning", after a reasoned
    discussion of the meaning of an object such as "chair", Minsky
    says:

    MINSKY: Reflection shows that "real things" are only rarely
    things.  At first one might suppose that "chair" is just "a thing
    with seat and legs and back".  But once one tries to frame a
    definition that works for all various chairs we recognize,
    there's little left in common except "something one can sit
    upon".  In the end we find that "a chair" is nearly just as
    mentalistic as "a wish"; again, we only find the unity we seek in
    *purpose* or *intended use*; [and he concludes]:

                      a chair is, in its ESSENCE, what we USE it for.
    [emphasis mine, JDI]

    Now take one of Peirce's formulations of his so-called "Pragmatic
    Maxim".

    PEIRCE: Consider what effects, that might conceivably have
    practical bearings, we conceive the object of our conception to
    have.  Then, our conception of these effects is the whole of our
    conception of the object.

    To my mind, these points of view are awfully close.  I wonder if
    Minsky recognizes that.  If not, can you point out the substance
    of the difference, if any?

Well, I see a difference here.  It seems to me that Peirce is
advocating the view that essence is not merely the
service a thing can provide US, in pursuit of OUR purposes, but final cause
in general.  While I think that something more should be said for
material cause and efficient cause, I think that the tendency of naive
pragmatism to equate the essence of things solely to our purposes is
problematic.  This philosophy, more legitimately called
instrumentalism than pragmatism, is the natural one from the vantage
point of the technician.  What would Thoreau have thought?  Peirce
made his thoughts clear in his critiques of James, Dewey, and Royce.

A chair is an artifact -- an instrument.  What is a tree?  A mere
wood-provider for chairs, or something more?

∂29-Jan-83  1422	GAVAN @ MIT-MC 	Sentences
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  14:22:15 PST
Date: Saturday, 29 January 1983  16:13-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Sentences
In-reply-to: The message of 29 Jan 1983  15:07-EST from DAM

    From: DAM

    The notion that words lead inevitably to sentences seems to me to be pure 
    conjecture on your [Minsky's] part.  

What was your evidence for the innateness of sentences again?  Was it
their apparent universality?  Is this any less conjectural?  Maybe the
phenomenon of language is cultural rather than individual.  I'm told I
learned the word "ball" first.  Then I learned how to say "I want the
ball."  My parents taught me.  It wasn't innate.

    Behaviorists thought language was "in the cards" in
    behaviorism.  Do you have any argument for the inevitability of sentences
    which is stronger than a hand waving behaviorist style "in the
    cards" argument? 

Language is social.  Linguistic rules are a cover for political rule.
The Creoles had no past tense because their masters didn't want them
to know the history of their oppression.  There is no stronger
mechanism of social control than the sentence.

    (Do you have a computer simulation which generates syntax
    or a definition of a precise computational mechanism and a concrete
    argument for why it would generate syntax?)
	
If he did, why would this convince you?

    	McCarthy pointed out that a sentence is defined by school teachers
    as "a complete thought".  To me this indicates a close relationship between
    syntax and semantics, e.g. that a sentence is a string of words with an
    algorithmically derivable meaning. 

How many meaning-deriving algorithms do you think there are for any
one string of words?  JUST WHAT DO YOU THINK MEANING IS?  To me,
meaning obviously has nothing to do with Tarskian semantics for the
same reason that the correspondence theory is incoherent.  What do
you mean, meaning?  You claim to be a realist and yet when we discuss
truth and meaning all I hear from you is talk about mathematics.  You
can't get more abstract than that.  And you can't get more removed
from reality, objective or otherwise.

What is meaning, really?  What is the smallest unit of meaning?   Is it
measurable?  How does it differ from reference? 

∂29-Jan-83  1528	←Bob <Carter at RUTGERS> 	Sentences
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  15:28:43 PST
Return-Path: <@MIT-MULTICS.ARPA,@RUTGERS:CARTER@RU-GREEN>
Received: from RUTGERS by MIT-MULTICS.ARPA TCP; 29-Jan-1983 18:07:38-est
Date: 29 January 1983  18:04-EST (Saturday)
Sender: CARTER at RU-GREEN
From: ←Bob <Carter at RUTGERS>
To:   GAVAN at MIT-MC
Cc:   DAM at MIT-OZ, MINSKY at MIT-OZ, phil-sci at MIT-OZ
Subject: Sentences


    From: GAVAN @ MIT-MC
    To:   DAM @ MIT-OZ
    cc:   MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
    Re:   Sentences

    Linguistic rules are a cover for political rule.

This is a remarkable statement.

    The Creoles had no past tense because their masters didn't want them
    to know the history of their oppression.

Do you have a citation for that extraordinary assertion of fact?  Or
were you just trying to keep your audience awake?

←Bob

∂29-Jan-83  1604	MINSKY @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  16:04:15 PST
Date: Saturday, 29 January 1983  18:59-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ mc
Subject: meta-epistemology, philosophy of science, innateness, and learning
In-reply-to: The message of 29 Jan 1983  14:03-EST from GAVAN


GAVAN: When I use the term "innate" I refer to those concepts and
     abilities that must pre-exist in the child at birth if it is to
     develop.  Innate mental abilities, then, would be the necessary
     conditions of knowledge.  I agree with Kant on this one.  In
     order to have any knowledge of anything in the world, I must first
     possess the ability to detect differences in space and time.  So
     the concepts of space and time are pure and a priori.

Well, I would say that the idea of "pre-exist" is problematical.  What
would seem necessary is that there must pre-exist machinery that would
permit learning those concepts.  The original machinery need not
have anything like those "concepts" to begin with.

I wonder how Kant would have reacted to Piaget's discoveries about
how little the infant knows about space and time at the start.

I do believe that such a machine would need to begin with ways to
deal with differences.  I do not see that it is NECESSARY, though,
to begin with an "innate" sense of time-difference.  The reason is that
the sparseness idea could lead to the invention of the idea of
time-sequence.

That doesn't mean that Kant's conclusion, that some time-machinery is
innate, must be wrong - only that his reasoning probably is wrong.  I
would expect that the brain evolved some special innate machinery to
make it easy to deal with sequences - e.g., because CHAINING is so
useful in general.  But Kant did not appreciate the full potential of
symbolic machinery that could assemble more of the same, so presumably
he could not see how such concepts could be discovered by exploration,
rather than having to be built in.

∂29-Jan-83  1608	BATALI @ MIT-MC 	Tarskian Coherence
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  16:08:23 PST
Date: Saturday, 29 January 1983  19:03-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Tarskian Coherence


My feelings about the relevance of Tarskian semantics seem certainly
not to be the establishment view if the establishment says that
Tarskian semantics says anything about the correspondence theory of
truth.  I don't think that it does.  In fact, as I indicated a while
ago, I think that this approach seems more in line with the coherence
theory. 

One aspect of the Tarskian theory is the recursive definition of truth,
wherein the truth of a sentence is defined to depend on the truth of
its constituents and the fact that they are combined according to
sound inference rules.  The soundness of inference rules is defined in
terms of the model of the theory, where the model is just another
mathematical theory.  This all sounds to me like Tarski requires that
some enormous set of mathematical statements must be coherent.  IF the
worls is a set of mathematical statements, then one could take
Tarskian semantics as consistent with a correspondence theory of
truth.

My realist bones tell me that the world is not a set of mathematical
statements, no matter if it can be described that way.  Thus I don't
see the world as a model for a mathematical theory.  But various
descriptions of the world must be coherent.  It is hard enough to
maintain that the world is objective.  It is a further difficulty to
maintain that the objective world is in some particularly helpful form
-- namely as a set of nice mathematical statements.  What is the
buzzword for such a position? Is it "formalism"?  Well I don't buy it.
Until someone convinces me otherwise, it seems reasonable to suppose
that the world consists of clouds and cows and sitting ducks.  Not
formal sentences.
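
For reference, the recursive clauses in question can be written out in a few
lines of standard model-theoretic notation; this is a textbook sketch in LaTeX
(nothing in it is specific to this exchange, and M, P, the t's, phi, and psi
are all schematic):

  % Satisfaction clauses of the Tarskian truth definition (standard textbook form)
  \begin{align*}
  M \models P(t_1,\dots,t_n)    &\iff (t_1^M,\dots,t_n^M) \in P^M \\
  M \models \neg\varphi         &\iff M \not\models \varphi \\
  M \models \varphi \wedge \psi &\iff M \models \varphi \text{ and } M \models \psi \\
  M \models \forall x\,\varphi  &\iff M \models \varphi[x := a] \text{ for every } a \text{ in the domain of } M
  \end{align*}

Note that no inference rules appear in these clauses; soundness of a rule
(truth preservation in every structure) is a separate fact proved about the
rule.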

∂29-Jan-83  1614	MINSKY @ MIT-MC 	Definitions of "innate"
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  16:13:49 PST
Date: Saturday, 29 January 1983  19:06-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Definitions of "innate"
In-reply-to: The message of 29 Jan 1983  14:36-EST from DAM


DAM:	I call an aspect of behavior innate in an organism if that
aspect is exhibited by that organism independent of the developmental
environment.  Of course I am only interested in plausible
developmental environments, ones in which there is light to see and a
long established community of people to interact with.

OK.  I think that there ought to be a word for this concept, all
right, and it is not what I meant by innate in the previous
discussion.  Also, I am afraid that the idea of "plausible"
environment may obscure most of the issues we've been quarreling
about.  But perhaps we can find a set of distinctions that will
be helpful here.  Perhaps some of the philosophers have indeed
made some that I don't know, e.g., in that maze of "empirical",
"a posteriori", etc.

However, the point that I have been droning about so much is this: it
seems to me that now that we have a lot of procedural concepts, we
might do better to make some new definitions, e.g., "innate-X" means
"almost inevitably produced by the cognitive mechanism under
conditions X", and so on.

I want also to express my admiration for DAM's elegance (and
willingness) in supplying crisp, intelligible formulations when things
break down.

∂29-Jan-83  1634	BATALI @ MIT-MC 	Kant was a smart fella, honest.  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  16:34:23 PST
Date: Saturday, 29 January 1983  19:24-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   GAVAN @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ mc
Subject: Kant was a smart fella, honest.
In-reply-to: The message of 29 Jan 1983  18:59-EST from MINSKY

    From: MINSKY

    That doesn't mean that Kant's conclusion, that some time-machinery is
    innate, must be wrong - only that his reasoning probably is wrong.  I
    would expect that the brain evolved some special innate machinery to
    make it easy to deal with sequences - e.g., because CHAINING is so
    useful in general.  But Kant did not appreciate the full potential of
    symbolic machinery that could assemble more of the same, so presumably
    he could not see how such concepts could be discovered by exploration,
    rather than having to be built in.

The key to Kant's conclusion was not the particular mechanisms he
claimed were innate but rather the reasoning he used to demonstrate
that some such mechanisms had to be a priori.  So correct were his views
that they are being taken as given by all of us on this list,
including the above passage by Marvin.  Kant did indeed understand
that the mind needed only some simple innate ideas like that of space
and time to create more complicated mechanisms of understanding.  He
would have been happy, I think, to see how our understanding of
computation makes his views even more plausible.

Innate knowledge is not the same as a priori knowledge.  A priori
knowledge is that knowledge that we must have to understand the world;
it does not depend on any particular facts about the world.  Knowledge
about time is thus a priori whether or not it is innate.  It often may
take a great deal of learning to know a priori facts, such as, for
example, mathematical facts.

Kant did not argue that the particular a priori knowledge we use was
logically necessary in the sense that there is no other possible way
to view the world.  What he did show was that for an agent, the
particular a priori knowledge it possesses would be necessarily true
for that agent.  The necessity is not logical because things COULD be
some other way.  The necessity is, in Kant's term, transcendental,
because the agent needs to believe it to believe anything at all.

Both Minsky's "Learning Meaning" and Kant's "Critique of Pure Reason"
are concerned with uncovering the form of transcendental knowledge.

∂29-Jan-83  1705	John McCarthy <JMC@SU-AI> 	innateness, sentences, etc.      
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  17:04:40 PST
Date: 29 Jan 83  1627 PST
From: John McCarthy <JMC@SU-AI>
Subject: innateness, sentences, etc.  
To:   phil-sci@MIT-OZ  

	Chomsky and Piaget and others consider many aspects of human
behavior to be innate, often disagreeing about just what is innate.
Even if each of them has defined, as exactly as he can, what he means
by innate, we may be too lazy to find the precise reference.  It is
unsound, however, to attribute the silliest meaning we can think of,
even though it may temporarily massage the ego.  It is a much better
approximation to attribute the most sensible meaning we can think of,
although if precision is sought, there is no substitute for reading
the literature.

	Piaget, I think, thought that certain concepts were innate
in the sense that they arise at a certain stage of development of
all normal humans.  I don't know what qualifications he made about
how normal the environment had to be.

	With regard to sentences, Minsky's off hand reference to
what I said was correct - if I interpret him correctly.  I doubt
that sentences are innate in the following sense.  1. If adults
brought up a child never uttering sentences, the child might not
come to use them.  2. A population of children initialized without
sentences would develop them in time.  I have no opinion about whether
this development would take place at the age of ten in the first
generation or would be an invention after several generations.  I
suspect this differs from Chomsky's opinion, because his ideas about
innate universal grammar involve sentences.

	To the extent that I understand Chomsky's argument I consider
it faulty.  He argues that an innate universal grammar is required,
because a child acquires the grammar of their native language from
a small amount of experience, and this grammar permits judging
the grammaticality of an indefinitely large collection of sentences.
It seems possible to me that a major evolutionary step towards
human intelligence occurred when the output of a pattern recognition step
could be fed back into the input and combined with early data.
This is a step beyond the simple chain suggested by anatomy, where
the first visual or auditory cortex passes signals to the second,
which transforms them and passes them on but never back to its own
inputs.  However, this capability is needed for thought processes
apart from language and might be a general intellectual mechanism
developed earlier than language.  The Chomsky strategy of studying
grammar first and thought later wouldn't uncover it.  Perhaps I
misrepresent Chomsky's point of view.  In principle, the point is
testable by looking for either behavioral or anatomical evidence
for such feedback processes.  For example, mental goal-seeking is
often a top down
process analogous to top down parsing.  "In order to achieve  C, I
need to perform an action that has preconditions  B and B', which requires
actions that have preconditions ... ".
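
A minimal sketch of that regression, with entirely hypothetical goal names and
rules, just to make the parallel with top-down parsing concrete:

# Hypothetical illustration of top-down goal regression (not any particular planner).
# Each rule maps a goal to the preconditions of an action that achieves it.
rules = {
    "C": ["B", "B'"],   # to achieve C, perform an action with preconditions B and B'
    "B": ["A"],
    "B'": [],           # no preconditions: directly achievable
    "A": [],
}

def achieve(goal, depth=0):
    """Regress from a goal to its preconditions, like top-down parsing."""
    print("  " * depth + "in order to achieve " + goal)
    for precondition in rules.get(goal, []):
        achieve(precondition, depth + 1)

achieve("C")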

	DAM interprets me correctly as believing that some early written
languages lacked sentences, although the oral languages of the
same people included sentences.  In fact this is known.  The oral
language of the Aztecs included sentences, although their inscriptions
didn't.  Much that is known of their culture was written by Aztec
priests, etc., after the Spanish taught them to write their own
oral language in the Latin alphabet.

	I am even willing to entertain the possibility that the first oral
languages didn't have sentences.  I would also suspect that some present
day primitive oral languages are impoverished in some respect, especially
in the way they are ordinarily used.  The anthropologists and bible
translators who study these languages may mistakenly impose European
linguistic categories on them.

	"I bet GAVAN make big fella mistake Friday" in his speculation
about the Creole languages - for any of the possible meanings of "Creole".
Consult Webster's Collegiate for at least four.  The Creole languages
weren't invented by masters for the use of slaves.  Moreover, I'll bet
that all of them can distinguish past events from present events when this
is necessary for the communication without ever using tenses.

	Apart from that it would be a mistake for GAVAN to suppose that we
advocates of the correspondence theories consider ourselves defeated.  We
merely have trotted out all the arguments we care to and don't want to
repeat ourselves.  Therefore, it is pointless for him to repeat flat
assertions he has made already about the meaninglessness of
correspondence statements.

∂29-Jan-83  1809	ISAACSON at USC-ISI 	Re:  What is a chair?   
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  18:08:37 PST
Date: 29 Jan 1983 1753-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  What is a chair?
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: minsky at MIT-MC, phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]29-Jan-83 17:53:47.ISAACSON>


In-Reply-To: Your message of Saturday, 29 Jan 1983, 16:56-EST


I think you make some valid points.  But, allow me to say, it is
probably premature to start here discussions of the fine detail
separating pragmatism and "instrumentalism" in the absence of a
ground swell to discuss pragmatism in the first place.  [You see,
I've got to be true to *my* pragmatic principle...]

Unless we hear from others on these matters, it is best, I think,
to discuss these on the other list, i.e., Phaneron at MIT-MC.


∂29-Jan-83  2057	MINSKY @ MIT-MC 	Kant was a smart fella, honest.  
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  20:57:20 PST
Date: Saturday, 29 January 1983  23:49-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   GAVAN @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ mc
Subject: Kant was a smart fella, honest.
In-reply-to: The message of 29 Jan 1983  19:24-EST from BATALI



I am trying to appreciate what your description of Kant might mean.
Let me try to explain just once more what I think is the difficulty
that makes his smartness seem somewhat moot to me.  Perhaps the most
poignant aspect is the developmental one, again.

BATALI: A priori knowledge is that knowledge that we must have to
     understand the world, it does not depend on any particular facts
     about the world.

To me the problem here is in who is "we".  Does Kant have the
idea of "single self" that, for me, makes most work on the philosophy
of mind, prior to Freud, applicable only to adults doing mathematics?

      Kant did indeed understand that the mind needed only some simple
     innate ideas like that of space and time to create more complicated
     mechanisms of understanding.

That's what I don't understand.  Do you mean that the mind needs
enough machinery to invent whatever it needs to understand whatever it
does however well it does?

      What he did show was that for an agent, that particular a priori
      knowledge it possesses would be necessarily true for that agent.


All I can make of this is that there was some idea, which seems
unrealistic to me, of "necessarily true" that he was concerned with.
Do you think it was a sound idea, given that children use logic
scarcely at all and adults make mistakes in it?

Finally, doesn't it all contain a mystical vision that one is born
with a mind which persists through life unchanged, except for its
contents?  Consider the idea of "learning":

     It often may take a great deal of learning to know a priori
     facts: such as, for example, mathematical facts.

If you have understood what I have said in my papers, you will understand
that the very idea of "learning" is an approximation.  We should use
it only in the sense of perturbation theory, over short times and
small changes.  The larger concept of "development" as we use it now,
views the mind itself as growing and changing.  My question, again, is
why you think those old ideas - whatever the quality of the reasoning
based on the pre-Freud, pre-Piagetian assumptions - are worth a lot of
study.

(I'm not saying they weren't revolutionary once.  So was Alchemy, and
so was Pythagorean mathematics.  A good deal of old philosophical thought
was absorbed, and even changed scientific and psychological
paradigms.  I'm only saying that they were just too primitive. 
Batali's summaries of Kant's thought sound to me clever for his time,
but that time is past - for understanding mind and meaning.)

∂29-Jan-83  2209	KDF @ MIT-MC 	The Objectivity of Mathematics 
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  22:08:59 PST
Date: Sunday, 30 January 1983  00:33-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   DAM @ MIT-OZ, MINSKY @ MIT-OZ, phil-sci @ MIT-OZ
Subject: The Objectivity of Mathematics
In-reply-to: The message of 29 Jan 1983  11:23-EST from GAVAN

	From: GAVAN
    Why do you say that the theory of how it [an innate logical mechanism]
    operates doesn't depend on the way that we judge?  Don't all our
    theories depend (to some extent at least) on the way that we judge?
    How can we even answer this question without a theory of how to judge?
    Hmmmmm.....

No.  For example, there are theories which I found bonkers upon first
exposure, but further acquaintance with the theory and the evidence
for it convinced me that it was my taste that had to change, rather
than the theory that had to be rejected.  In approaching any
particular area, there are some methodological
assumptions that one must make, and indeed different theories can
result.  For scientific theories, however, there are usually enough
shared assumptions (controls are important, for example) that at least
in some cases there is agreement on whether or not fact X supports,
counters, demolishes, or is irrelevant to, theory Y.  Usually the
subject of study is not epistemology, so there is a fairly natural
separation between the theory of evidence and the theory of what is
being theorized about.
	This brings up a point about why I don't spend my time reading
philosophy.  Ideas in philosophy are tested by argument and debate;
there is no guarantee that the winner won't be the person with the
best skills in rhetoric.  While doing science doesn't prohibit a
similar outcome, bashing one's head against the world in one way or
another helps crystallize ideas and firm them up.  No matter how smart
someone is, they may not be able to overcome incorrect intuitions
unless they experiment and find that they just don't work.  Until now
we haven't had any decent tools to perform such experiments (and about
half the time I think we still don't...), so it's not surprising that
"Kant, Hegel, etc.."  may have very little to say to us.
		Ken

∂29-Jan-83  2319	MINSKY @ MIT-MC 	innateness, sentences, etc.      
Received: from MIT-MC by SU-AI with NCP/FTP; 29 Jan 83  23:19:07 PST
Date: Saturday, 29 January 1983  23:32-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.  


JMC: ...when the output of a pattern recognition could be fed back
     into the input and combined with early data.
     ...  However, this capability is needed for thought processes
     apart from language and might be a general intellectual mechanism
     developed earlier than language.  The Chomsky strategy of studying
     grammar first and thought later wouldn't uncover it.

That's just what I meant.  The idea that certain uniformities of
language must be "linguistic universals" is what I deplore about
Chomskian thinking.  In my extremest moments I attribute it to such
persons' need to feel that their subject is as self-contained and
hence as scientifically respectable as mathematics.

∂30-Jan-83  0817	John Batali <Batali at MIT-OZ> 	Kant: no dummy    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  08:16:54 PST
Date: Sunday, 30 January 1983, 11:00-EST
From: John Batali <Batali at MIT-OZ>
Subject: Kant: no dummy
To: MINSKY at MIT-MC, BATALI at MIT-OZ
Cc: GAVAN at MIT-OZ, JCMa at MIT-OZ, phil-sci at MIT-MC
In-reply-to: The message of 29 Jan 83 23:49-EST from MINSKY at MIT-MC


    From: MINSKY @ MIT-MC

    BATALI: A priori knowledge is that knowledge that we must have to
	 understand the world, it does not depend on any particular facts
	 about the world.

    To me the problem here is in who is "we".  Does Kant have the
    idea of "single self" that, for me, makes most work on the philosophy
    of mind, prior to Freud, applicable only to adults doing
    mathematics?

It doesn't matter.  A priori is a priori whether for a single agent, the
"we" of a multiple agent theory or the we of the scientific community.
The point is that there is some knowledge we must have before the world
makes any sense at all.  This is Kant's big point.

    That's what I don't understand.  Do you mean that the mind needs
    enough machinery to invent whatever it needs to understand whatever it
    does however well it does?

Yes.  Seems obvious to us computationalists, doesn't it?  Kant was
right.  The key point is to understand that it is something like
"machinery."

	  What he did show was that for an agent, that particular a priori
	  knowledge it possesses would be necessarily true for that agent.

    All I can make of this is that there was some idea, which seems
    unrealistic to me, of "necessarily true" that he was concerned with.
    Do you think it was a sound idea, given that children use logic
    scarcely at all and adults make mistakes in it?

The point has nothing to do with logic.  The point has to do with what
we know and how we know it.  And that if, for example, we organize the
world in terms of time and space, then we can't make any sense of the
world without that organization.  So those ideas are necessary for us.
As I mentioned, it's not logical necessity but something like "practical"
or "rational" necessity.

    Finally, doesn't it all contain a mystical vision that one is born
    with a mind which persists through life unchanged, except for its
    contents?  Consider the idea of "learning":

	 It often may take a great deal of learning to know a priori
	 facts: such as, for example, mathematical facts.

    If you have understood what I have said in my papers, you will understand
    that the very idea of "learning" is an approximation.  We should use
    it only in the sense of perturbation theory, over short times and
    small changes.  The larger concept of "development" as we use it now,
    views the mind itself as growing and changing.

Yes.  The mind constructs itself over time.  What does it need to know
and know how to do in order to do this?  At any point in time how can it
use what it knows to continue the process?

    My question, again, is
    why you think those old ideas - whatever the quality of the reasoning
    based on the pre-Freud, pre-Piagetian assumptions - are worth a lot of
    study.

What is it about these two people, in particular, that makes them the
place to start?  The tradition of worrying about the mind stretches
unbroken back at least to Plato.  Many important distinctions have been
made along the way; many important observations, false starts, and
arguments have been produced.

It may be that the only thing that separated Plato, or Hume, or
Leibniz, or Kant, or Hegel, or Husserl from "success" was the lack of an
understanding of computation.  They certainly understood a lot else.

It is my hope that the invention of the computer has finally given us
the means to actually test these ideas.  Perhaps the computer is the
tool we need to actually bring some of them to fruition.

∂30-Jan-83  1045	GAVAN @ MIT-MC 	meta-epistemology, philosophy of science, innateness, and learning   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  10:45:13 PST
Date: Sunday, 30 January 1983  13:31-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ mc
Subject: meta-epistemology, philosophy of science, innateness, and learning
In-reply-to: The message of 29 Jan 1983  18:59-EST from MINSKY

    From: MINSKY

    GAVAN: When I use the term "innate" I refer to those concepts and
         abilities that must pre-exist in the child at birth if it is to
         develop.  Innate mental abilities, then, would be the necessary
         conditions of knowledge.  I agree with Kant on this one.  In
         order to have any knowledge of anything in the world, I must first
         possess the ability to detect differences in space and time.  So
         the concepts of space and time are pure and a priori.

    Well, I would say that the idea of "pre-exist" is problematical.  What
    would seem necessary is that there must pre-exist machinery that would
    permit learning those concepts.  The original machinery need not
    have anything like those "concepts" to begin with.

Well, maybe, depending upon what you think a concept is.  What do you
mean by "machinery" other than a concept?  How is the infant to learn
anything at all, including space and time, without an innate sense of
space and time?

    I wonder how Kant would have reacted to Piaget's discoveries about
    how little the infant knows about space and time at the start.

He might ask, "How does Piaget know how little they knew?  Does he ask
them?"

    I do believe that such a machine would need to begin with ways to
    deal with differences.  I do not see that it is NECESSARY, though,
    to begin with an "innate" sense of time-difference.  The reason is that
    the sparseness idea could lead to the invention of the idea of
    time-sequence.

Why choose time, then, as a dimension to detect difference along?
Because it's there?  Well, maybe.  But how do we know that time is
there?  Probably biologically.  You would start with difference
detection across space, then?  Doesn't the infant need to know
something about time in order to put objects in its mouth?

    That doesn't mean that Kant's conclusion, that some time-machinery is
    innate, must be wrong - only that his reasoning probably is wrong.  I
    would expect that the brain evolved some special innate machinery to
    make it easy to deal with sequences - e.g., because CHAINING is so
    useful in general.  But Kant did not appreciate the full potential of
    symbolic machinery that could assemble more of the same, so presumably
    he could not see how such concepts could be discovered by exploration,
    rather than having to be built in.

I think you're wrong here.  Leibniz appreciated the potential of
symbolic machinery, and since Kant studied Leibniz, he probably did
too.  If the concepts of space and time are not innate, where does the
infant explore?

∂30-Jan-83  1105	GAVAN @ MIT-MC 	Sentences
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  11:05:03 PST
Date: Sunday, 30 January 1983  13:04-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ←Bob <Carter @ RUTGERS>
Cc:   phil-sci @ MIT-OZ
Subject: Sentences
In-reply-to: The message of 29 Jan 1983  18:04-EST () from ←Bob <Carter at RUTGERS>

    From: ←Bob <Carter at RUTGERS>

        From: GAVAN @ MIT-MC

        Linguistic rules are a cover for political rule.

    This is a remarkable statement.  

What do you find remarkable about it?

        The Creoles had no past tense because their masters didn't want them
        to know the history of their oppression.  

    Do you have a citation for that extraordinary assertion of fact?  Or
    were you just trying to keep your audience awake?

The latter.  It's hearsay from JCMA, who learned about it in a
discussion he had with a linguist in Guadeloupe this winter.  Don't
know if there's anything written on it, but if you're interested check
into the Creole of Guadeloupe and Martinique.

∂30-Jan-83  1128	DAM @ MIT-MC 	Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  11:28:03 PST
Date: Sunday, 30 January 1983  14:13-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Carter @ RU-GREEN
cc:   phil-sci @ MIT-OZ
Subject: Tarskian Semantics


	Date: 29 January 1983  18:19-EST (Saturday)
	From: ←Bob <Carter at RUTGERS>

	    From: DAM @ MIT-MC
	    To:   phil-sci @ MIT-OZ
	    Re:   Tarskian Semantics

		   Also it is well known that for most formal languages
	    there are indistinguishable models, two models such that
	    every formal sentence has the same truth value on both.  

	Is this notion briefly explicable in common sense terms?  Or was it
	your point that it is not?  If it is, could you explicate it for me?

	I think this notion is actually quite simple if you are
willing to ignore some technical details.  Consider the first order
predicate calculus (the details of the language are irrelevant here; I
simply want to talk about some concrete formal language).  A model for
the first order predicate calculus is a first order structure (it is
fairly important to know what a first order structure is).  For
example, the natural numbers are a first order structure.  In general a
first order structure is a domain (a universe of discourse such as the
numbers) and a designated set of functions and predicates over that
universe of discourse.
	A given sentence of first order predicate calculus is either
true or false OF A PARTICULAR FIRST ORDER STRUCTURE.  For example
there is a formal sentence which "says" "for all x there is a y such
that y is greater than x".  This sentence will be true in some
structures and false in others.  It will be true in several different
structures (this particular sentence is true of both the natural
numbers and the integers).
	Two structures are indistinguishable with respect to the first
order predicate calculus just in case for EVERY sentence Phi, either
Phi is true of both structures or Phi is false in both structures.
Two structures which are indistinguishable with respect to first order
predicate calculus can often be distinguished by sentences in other
formal languages (and by precise informal statements).
	There is a stronger sense in which two structures can be
indistinguishable, namely if they are isomorphic.  Understanding the
precise notion of isomorphism as it applies to first order structures
is (I think) very important.  However it is somewhat harder to
communicate the precise definition of this notion and even harder to
communicate intuitions about this precise definition (one must embrace
the notion that isomorphic structures "are really identical" even
though they are not equal).  Isomorphic structures can not even be
destinguished by english mathematical statements (or at least this is
a good conjecture).

	I hope this has helped, David Mc
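
A concrete instance of the weaker, first-order notion may help; this is a
standard example, not part of DAM's message, and it takes the usual facts
about dense linear orders for granted.  The rationals and the reals, each with
just their ordering, are indistinguishable in the first order predicate
calculus yet are not isomorphic:

  % Both structures satisfy the axioms of dense linear order without endpoints, e.g. density:
  \forall x\,\forall y\,\bigl(x < y \rightarrow \exists z\,(x < z \wedge z < y)\bigr)
  % That theory is complete, so the two structures make the same first order sentences true:
  (\mathbb{Q},<) \equiv (\mathbb{R},<)
  % but no isomorphism between them is possible, since their domains have different cardinalities:
  |\mathbb{Q}| = \aleph_0 \neq 2^{\aleph_0} = |\mathbb{R}|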

∂30-Jan-83  1220	John C. Mallery <JCMa at MIT-OZ> 	Principle of Charity in Argument and Creole Langauges   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:20:03 PST
Date: Sunday, 30 January 1983, 15:01-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: Principle of Charity in Argument and Creole Langauges
To: John McCarthy at su-ai
Cc: phil-sci at mc
In-reply-to: The message of 29 Jan 83 19:27-EST from John McCarthy <JMC at SU-AI>

    Date: 29 Jan 83  1627 PST
    From: John McCarthy <JMC@SU-AI>
    Subject: innateness, sentences, etc.  
    To:   phil-sci@MIT-OZ  

    It is unsound, however, to attribute the silliest meaning we can
    think of, even though it may temporarily massage the ego.  It is a
    much better approximation to attribute the most sensible meaning we
    can think of, although if precision is sought, there is no
    substitute for reading the literature.

JMC is right on target here.  This is the "principle of charity" in
rhetoric.  It holds that the adversary in an argument must be given the
benefit of the doubt.  That means, don't "straw-man" the adversary by
criticizing the most simple-minded version of the argument.  Rather,
criticize the strongest form of the argument. Failure to adhere to the
principle of charity undermines the strength of the counter-argument.


	    "I bet GAVAN make big fella mistake Friday" in his speculation
    about the Creole languages - for any of the possible meanings of "Creole".
    Consult Webster's Collegiate for at least four.  The Creole languages
    weren't invented by masters for the use of slaves.  Moreover, I'll bet
    that all of them can distinguish past events from present events when this
    is necessary for the communication without ever using tenses.

Creole languages in this case refer to languages which evolved from
pidgin French in the Caribbean.  The point is that there are no tenses
for the past.  This is in sharp contrast to continental French!!
Past-tense information must be conveyed through non-syntactic
mechanisms.  Pidgin French was invented in order to make communication
between masters and slaves possible.  That means it was jointly
invented.  It also turned out to be useful for slave-slave
communication, as the blacks spoke widely varying African languages.
One interesting point here is how semantics can make up for
underdeveloped syntax.  Some linguists view the degree of
sophistication of a language as the degree to which the language
"compiles" syntactically decidable information into its syntax, rather
than forcing the speaker to work harder, decoding it semantically.  Those
same linguists view French as one of the most sophisticated languages.

∂30-Jan-83  1246	John C. Mallery <JCMa at MIT-OZ> 	Chomsky, Fodor, Innateness
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:42:55 PST
Date: Sunday, 30 January 1983, 15:28-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: Chomsky, Fodor, Innateness
To: MINSKY at MIT-MC
Cc: phil-sci at mc
In-reply-to: The message of 29 Jan 83 23:32-EST from MINSKY at MIT-MC
Supersedes: The message of 30 Jan 83 15:27-EST from John C. Mallery <JCMa at MIT-OZ>

    From: MINSKY @ MIT-MC
    Subject: innateness, sentences, etc.  

    JMC: ...when the output of a pattern recognition could be fed back
	 into the input and combined with early data.
	 ...  However, this capability is needed for thought processes
	 apart from language and might be a general intellectual mechanism
	 developed earlier than language.  The Chomsky strategy of studying
	 grammar first and thought later wouldn't uncover it.

    That's just what I meant.  The idea that certain uniformities of
    language must be "linguistic universals" is what I deplore about
    Chomskian thinking.  In my extremest moments I attribute it to such
    persons' need to feel that their subject is as self-contained and
    hence as scientifically respectable as mathematics.

Right on, Marvin.  Why don't you flame about the behaviorism in Chomsky
while you're at it?

The Chomsky strategy deproblematizes the conditions whereby grammatical
behavior becomes possible.  This makes thought, then, a non-issue for
Chomskians, which is what got the innateness debate going in the first
place.  Chomsky was trying to save himself.  This would seem to be an
example of labelling that which one has no theory about as being innate.

If grammatical behavior is not innate, then the syntactic explanations
of it would be in serious trouble.  Aren't Fodor's arguments about
mentalese just an effort to prop up the Chomskian position [Or gain
credibility through linkage to an established tradition]?  If this is
the case, it sure seems that the analytical position on philosophy of
mind which derives from Fodor is hopelessly lost.  Comments?

∂30-Jan-83  1246	John McCarthy <JMC@SU-AI>
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:45:58 PST
Date: 30 Jan 83  1233 PST
From: John McCarthy <JMC@SU-AI>
To:   dam@MIT-OZ
CC:   phil-sci@MIT-OZ    

	It seems to me that DAM is doing those who have not studied
mathematical logic a disservice with his substantially correct but
informal explanation of first order indistinguishability of structures.
The ideas are precise and beautiful, but they require some study for
understanding.  A person with a completely accepting attitude can get some
notion from such a non-technical exposition, but a person who finds them
in conflict with his own ideas will haggle vaguely rather than readjust
his intuition.

	The best source I know for what can and cannot be formalized in
first order logic is the first chapter, by Jon Barwise, in the Handbook of
Mathematical Logic.  Even that requires a slight acquaintance with group
theory in order to understand the examples.  Unfortunately, the paperback
edition of the book, the rest of which is much more technical, costs $39.
Good beginning books are Robert Rogers's "Mathematical Logic and
Formalized Theories" and Patrick Suppes's "Introduction to Mathematical
Logic"; the former is specifically oriented toward philosophers.  The text
most referred to by mathematical logicians is Joseph Shoenfield's
"Mathematical Logic".

	Philosophers have a much greater tendency than mathematicians and
scientists to base their work on past formulations.  It seems to me that
one reason, besides habit, is that the ideas are unclear.  If you don't
read and quote Aristotle, someone will accuse you of getting something
important wrong.  No-one will accuse you of getting calculus or Newtonian
mechanics wrong just because you haven't read Leibniz or Newton or
propositional calculus wrong because you haven't read Boole.  This
phenomenon is a weakness of philosophy.

	My own opinion is that ideas from mathematical logic, computing,
and even artificial intelligence research are essential to anyone who
wants to study epistemology.  I also believe that its problems will be
solved as decisively as Newton solved the problems of mechanics, and the
proof will be computer programs that can make scientific discoveries.
Reading past philosophers, and probably even present day artificial
intelligence researchers, will not be necessary in order to understand how
the programs work, although it will be needed to understand the history of
the subject.  There is an enormous amount of somewhat relevant past
philosophy, but it is probably a better strategy to concentrate on recent
work and above all, to think directly about the problems rather than about
winning debates on behalf of one's already held beliefs.

	Incidentally, the results DAM cites on indistinguishable
structures depend heavily on using first order logic with the further
restriction that the only individuals in the domain of the logic are the
elements of the set being studied.  An example is the statement, proved in
Barwise, that any sentence true in all torsion free groups is true in some
group with torsion, and therefore torsion freeness is not a first order
concept.  Torsion freeness can be formalized in second order logic or in
set theory or even in a different first order theory in which subgroups
are permitted to be objects.  Why the interest in first order logic then?
Because it is complete (Godel), and because those concepts that have first
order formalizations are worth distinguishing for mathematical reasons.  I
am sure, however, that AI will not want to restrict itself to elementwise
first order formulations, and I doubt that they will have any special
importance for AI.

	A final recommendation: Aaron Sloman's book "The computer
revolution in philosophy".  Most of what he says, I agree with.

∂30-Jan-83  1227	DAM @ MIT-MC 	innateness 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:26:57 PST
Date: Sunday, 30 January 1983  15:12-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness


	Date: Saturday, 29 January 1983  19:06-EST
	From: MINSKY

	It seems to me that now that we have a lot of procedural concepts, we
	might do better to make some new definitions, e.g., "innate-X" means
	"almost inevitably produced by the cognitive mechanism under
	conditions X", and so on.

OK.  How about calling this notion "universal over X" as in "universal
over plausible human environments"?  However, it is important to keep
in mind that this is one possible reading of the word "innate".

	David Mc

∂30-Jan-83  1248	John McCarthy <JMC@SU-AI> 	my error     
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:48:43 PST
Date: 30 Jan 83  1240 PST
From: John McCarthy <JMC@SU-AI>
Subject: my error 
To:   dam@MIT-OZ
CC:   phil-sci@MIT-OZ    

I misquoted the result on indistinguishability.  The correct statement is
"The set of first-order sentences true in all torsion abelian groups is
true in some abelian group  H  which is not torsion".  These are first order
sentences where the individuals are the elements of the group and the constant
symbols of the language are just the group operation and the identity element.
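
For reference, the corrected statement follows from the compactness theorem;
one standard sketch of the argument, written additively (a reconstruction for
readers without Barwise at hand, not a quotation from him):

  % Let T be the set of first order sentences, in the language of + and 0, that are
  % true in every torsion abelian group.  Add a new constant c and the sentences
  \{\ \underbrace{c + \cdots + c}_{n\ \mathrm{times}} \neq 0 \ :\ n \geq 1\ \}
  % Any finite subset of these mentions only n up to some bound N, and is satisfied
  % in Z/pZ for a prime p > N with c read as 1; Z/pZ is a torsion abelian group, so
  % it satisfies T as well.  By compactness the whole collection has a model H.
  % Then H satisfies T, but the element named by c has infinite order, so H is not torsion.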


∂30-Jan-83  1144	John C. Mallery <JCMa at MIT-OZ> 	meta-epistemology, philosophy of science, innateness, and learning
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  11:44:12 PST
Date: Sunday, 30 January 1983, 14:37-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: meta-epistemology, philosophy of science, innateness, and learning
To: MINSKY at MIT-MC
Cc: phil-sci at mc
In-reply-to: The message of 29 Jan 83 18:59-EST from MINSKY at MIT-MC


    From: MINSKY @ MIT-MC
    Subject: meta-epistemology, philosophy of science, innateness, and learning
    In-reply-to: The message of 29 Jan 1983  14:03-EST from GAVAN

    I do believe that such a machine would need to begin with ways to
    deal with differences.  I do not see that it is NECESSARY, though,
    to begin with an "innate" sense of time-difference.  The reason is that
    the sparseness idea could lead to the invention of the idea of
    time-sequence.

What is the field over which the "innate" difference detector notes
differences?  What's your method for generating the idea of
time-sequence from those difference detectors?  If you can answer the
last question (or if it is possible to answer it), then time is not
"innate."  In that case what about space?

∂30-Jan-83  1239	John C. Mallery <JCMa at MIT-OZ> 	innateness, sentences, etc.    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  12:39:34 PST
Date: Sunday, 30 January 1983, 15:27-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: innateness, sentences, etc.  
To: MINSKY at MIT-MC
Cc: phil-sci at mc
In-reply-to: The message of 29 Jan 83 23:32-EST from MINSKY at MIT-MC

    From: MINSKY @ MIT-MC
    Subject: innateness, sentences, etc.  

    JMC: ...when the output of a pattern recognition could be fed back
	 into the input and combined with early data.
	 ...  However, this capability is needed for thought processes
	 apart from language and might be a general intellectual mechanism
	 developed earlier than language.  The Chomsky strategy of studying
	 grammar first and thought later wouldn't uncover it.

    That's just what I meant.  The idea that certain uniformities of
    language must be "linguistic universals" is what I deplore about
    Chomskian thinking.  In my extremest moments I attribute it to such
    persons' need to feel that their subject is as self-contained and
    hence as scientifically respectable as mathematics.

Right on, Marvin.  Why don't you flame about the behaviorism in Chomsky
while you're at it?

The Chomsky strategy deproblematizes the conditions whereby grammatical
behavior becomes possible.  This makes thought, then, a non-issue for
Chomskians, which is what got the innateness debate going in the first
place.  Chomsky was trying to save himself.  This would seem to be an
example of labelling that which one has no theory about as being innate.

If grammatical behavior is not innate, then the syntactic explanations
of it would be in serious trouble.  Aren't Fodor's arguments about
mentalese just an effort to prop up the Chomskian position [Or gain
credibility through linkage to an established tradition]?  If this is
the case, it sure seems that the analytical position on philosophy of
mind which derives from Fodor is hopelessly lost.  Comments?

∂30-Jan-83  1311	DAM @ MIT-MC 	Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  13:10:54 PST
Date: Sunday, 30 January 1983  15:07-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   BATALI @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Tarskian Semantics


	Date: Saturday, 29 January 1983  19:03-EST
	From: BATALI

	The soundness of inference rules is defined in terms of the model of
	the theory, where the model is just another mathematical theory.  This
	all sounds to me like Tarski requires that some enormous set of
	mathematical statements must be coherent.  IF the world is a set of
	mathematical statements, then one could take Tarskian semantics as
	consistent with a correspondence theory of truth.

	Until someone convinces me otherwise, it seems reasonable to suppose
	that the world consists of clouds and cows and sitting ducks.  Not
	formal sentences.

	It sounds to me like you are confused about the nature of Tarskian
semantics.  Tarskian semantics defines a formal relationship between
a sentence and "something else".  That "something else" is ALMOST NEVER thought
of as a set of sentences.  The only reason that this "something else" is not
taken to be clouds and cows is that mathematicians don't talk about clouds
and cows.  However, this something else can be a set of numbers, a wave
function, a computer program, a Turing machine, a bit string, a formal
language, etc.  Tarskian semantics relates sentences and worlds, not sentences
and sentences.  The only problem is that mathematicians restrict their
discussions to certain kinds of worlds.
	The formal recursive character of Tarskian semantics is not
important (in my view).  What is important is that there is SOME
precisely defined function which takes a sentence and a "world" and
gives "true" or "false".

	David Mc

∂30-Jan-83  1405	DAM @ MIT-MC 	innateness, sentences, etc.    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:05:35 PST
Date: Sunday, 30 January 1983  16:41-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   JCM @ SU-AI, phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.


	Date: 29 Jan 83  1627 PST
	From: John McCarthy <JMC at SU-AI>

	To the extent that I understand Chomsky's argument I consider
	it faulty.

	(Feedback in concept formation) is needed for thought processes
	apart from language and might be a general intellectual mechanism
	developed earlier than language.  The Chomsky strategy of studying
	grammar first and thought later wouldn't uncover it.

	I would like to take this opportunity to agree that Chomsky's
arguments are not airtight.  Suppose there are Martians and that they
are at roughly the same stage of cognitive development (evolutionarily).
Chomsky would see no reason to suspect that martian linguistic
universals are similar to ours.  An alternative view is that any
mechanism for cognition would settle on the same linguistic universals
because of the sparseness of effective cognitive mechanisms.  The
theory that linguistic universals are actually universal over planets
is plausible given a sparseness theory of mechanisms that perform
certain needed functions.  Consider the parallel independent evolution
of eyes in three different phyla.
	I find it plausible (perhaps likely) that separate
evolutionary lines would converge on SIMILAR basic cognitive and
linguistic mechanisms.  Thus there could be some linguistic features
which are universal over planets and are thus "innate" in the
evolutionary process.  There can also be some (larger?) set of
linguistic properties which are "universal over people" and thus
"innate" in humans.  Chomsky addresses only "universal over humans"
and ignores the potentially rich theory of "universals over planets".
As engineers rather than scientists AI researchers must be concerned
with things which are "universal over planets" (consider eyes as
derived from the laws of optics).
	A phenomenon which is universal over planets might be
developed in an organism by an extremely fast evolutionary process.
If there were only one species with eyes, then the sparseness theory
might be a good theory of a particular organism's development of eyes.
However, it is clear that many aspects of eyes, such as the number an
organism has, whether they are simple or compound, and the shape of
pupils, are only universal over a given species and are not universal
over planets.  Thus the sparseness theory of the development of eyes
in a given organism is clearly wrong (but it might have been right).
	If we take "innate" to mean "universal over people" then
Chomsky has defined himself into a safe position but has unfairly
dodged an important point.  It might be possible to study linguistics
starting with only simple computational notions and the assumption
that people are computational systems which are also cognitively
effective in problem solving and communication.  Such an approach
MIGHT get somewhere, especially if the human universals are actually
universals over planets.  Human development might be based on a kind
of fast evolutionary process which converges on evolutionary universals.
This does not rule out fixed universals (such as the structure of
the human eye) which do not develop this way.

	Does the sparseness theory explain Chomsky's X-bar linguistic
universals and the tautological mathematical truths?  I don't know but
I suspect not.  Even if certain aspects of language are universal over
planets (lenses seem to be universal in this sense) the sparseness
theory itself is not a very satisfactory explanation.  Does Marvin's
new theory provide the analog of a theory of optics and focusing?

	David Mc

∂30-Jan-83  1410	DAM @ MIT-MC 	innateness, sentences, etc.    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:10:05 PST
Date: Sunday, 30 January 1983  16:46-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.


	Date: Saturday, 29 January 1983  23:32-EST
	From: MINSKY

	In my extremest moments I attribute (the notion of linguistic
	universals) to (Chomsky's) need to feel that (linguistics) is
	as self-contained and hence as scientifically respectable
	as mathematics.

	I think you are absolutely correct about this (though I would
say physics rather than mathematics since linguistics is empirical rather
than tautological).  I think this "need" of Chomsky's is completely
laudable.

	David Mc

∂30-Jan-83  1418	MINSKY @ MIT-MC 	innateness, sentences, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:18:44 PST
Date: Sunday, 30 January 1983  17:10-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.
In-reply-to: The message of 30 Jan 1983  16:46-EST from DAM


The desire to do something that respectable is laudable.  The desire
to do it by declaring that linguistics is ("defined") to be
completable within the domain of sentences was in my view a bad
judgment, but was worth trying.  The substance of my grumble, then, is
the dogged and unreasonable persistence in that view after the middle
1960's which, in my view, retarded the development of a generation of
linguistic students.  I don't want to carry on about this, though;
only hindsight can show which paths a science "should" have pursued.

∂30-Jan-83  1424	KDF @ MIT-MC 	Innateness of Space and Time   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:23:53 PST
Date: Sunday, 30 January 1983  17:09-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   John C. Mallery <JCMa @ MIT-OZ>
Cc:   MINSKY @ MIT-OZ, phil-sci @ mc
Subject: Innateness of Space and Time
In-reply-to: The message of 30 Jan 1983 15:28-EST from John C. Mallery <JCMa>

	Part of the disagreement, I think, comes from thinking of
space and time as having a single representation.  The evidence so far
indicates otherwise.  Both are "continuous" things, and the criteria
for individuating them into discrete pieces that can be reasoned about
symbolically depend on the kind of reasoning to be performed.
	In spatial reasoning, for instance, the notion of "place" is
different according to whether you are reasoning about free motion
(see my MS) or motion under control of an arm (see Tomas
Lozano-Perez's PhD thesis or Rod Brooks's paper in the last AAAI
proceedings).  What must be "built in" are the facilities for
computing such representations, and probably in a programmable
way (children are not born doing these tasks well, although that could
also be due to lack of motor control).  Shimon Ullman has recently
begun constructing a theory of how people compute geometric predicates,
but I don't think he's had time to get very far yet.
	The situation with time is either more clear or less clear,
depending on your theoretical assumptions.  Theories of action (see
Allen, McDermott) or theories of dynamics (dekleer, myself) provide
the criteria for splitting time into pieces, although we all differ on
the details of what kinds of times there are (alternate futures,
intervals versus instants, etc.) and so far there are few agreements.
What is more mysterious is the substrate these representations are
built on.
	In summary, we need to be clearer about "concept of space" and
"concept of time" before arguing much about whether or not they are innate.





∂30-Jan-83  1428	DAM @ MIT-MC 	some mathematical results 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:28:43 PST
Date: Sunday, 30 January 1983  17:14-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: some mathematical results
In-reply-to: The message of 30 Jan 83  1240 PST from John McCarthy <JMC at SU-AI>


	Well it sure would be nice if everyone really understood
some mathematical logic and I also encourage people to do some hard
studying.  Without trying to intimidate, however, I would like to
give one more simple example of something not expressible in first
order logic.  Consider two binary relations R and R'.  There is
no sentence of first order logic which is true of a structure
just in case R' is the transitive closure of R.  In other words
the statement "R' is the transitive closure of R" is not expressible
in first order logic.  This is (I think) independent of whether the elements
of the domain are taken to be simple points, or whether they have internal
structure.
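
One standard way to see this, via compactness (sketched here in modern
notation): suppose some first-order sentence $\varphi$ were true in a
structure exactly when $R'$ is the transitive closure of $R$.  Add two fresh
constants $a$ and $b$ and consider the theory

    $\{\varphi,\; R'(a,b)\} \cup \{\, \sigma_n \;:\; n \geq 1 \,\}$

where $\sigma_n$ says there is no $R$-chain of length $n$ from $a$ to $b$
(for $n = 1$ this is just $\neg R(a,b)$, and in general
$\neg \exists x_1 \ldots x_{n-1}\,(R(a,x_1) \wedge \cdots \wedge R(x_{n-1},b))$).
Every finite subset has a model: let $R$ be a single chain from $a$ to $b$
longer than any $n$ mentioned, and let $R'$ be its transitive closure.  By
compactness the whole theory has a model; in it $\varphi$ holds, so $R'(a,b)$
means some finite $R$-chain joins $a$ to $b$, contradicting one of the
$\sigma_n$.  Hence no such $\varphi$ exists.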

	David Mc

∂30-Jan-83  1432	MINSKY @ MIT-MC 	meta-epistemology, etc.
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:32:09 PST
Date: Sunday, 30 January 1983  17:21-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   gavan @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-MC
Subject: meta-epistemology, etc.
In-reply-to: The message of 30 Jan 1983  16:34-EST from ISAACSON at USC-ISI


JDI: It turns out that, if all you can do is detect differences over
     strings, then, if this is indeed ALL you can do, you'd keep doing it
     indefinitely.  The mere repetition then gives you some "rhythm", or a
     primordial time-dimension.

My intuition agrees with JDI's.  If you have a machine capable of
computation and memory of some sort, and ways to discern differences
of simple kinds - and probably, also, ways to "chain" or otherwise
build structures, this should have a lot of potential.  I don't see
that any other special provisions for concepts of time and space
need be supplied.

(However, in order for a machine to bootstrap itself into doing things
that we might ascribe intellectual worthiness to, I believe there must
be a lot of other stuff about evidence-processing, e.g., hill-climbing
processes and learning machinery.  I don't think a machine is likely
to become intelligent without some initial heuristic machinery.  It
needs some way to be biased to grow in the directions that, e.g.,
emphasize better-than-random prediction methods.)

∂30-Jan-83  1441	KDF @ MIT-MC 	meta-epistemology, etc.   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:41:16 PST
Date: Sunday, 30 January 1983  17:30-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   gavan @ MIT-OZ, JCMa @ MIT-OZ, minsky @ MIT-OZ, phil-sci @ MIT-MC
Subject: meta-epistemology, etc.
In-reply-to: The message of 30 Jan 1983  16:34-EST from ISAACSON at USC-ISI

    Date: Sunday, 30 January 1983  16:34-EST
    From: ISAACSON at USC-ISI
    To:   JCMa
    cc:   minsky, gavan, phil-sci at MIT-MC, isaacson at USC-ISI
    Re:   meta-epistemology, etc.

    In-Reply-To: JCMa's message of Sunday, 30 Jan 1983, 14:37-EST and
    the relevant messages from MINSKY and GAVAN


    JCMa: What is the field over which the "innate" difference
    detector notes differences?  What's your method for generating
    the idea for time-sequence from those difference detectors?


    MINSKY: I do believe that such machine would need to begin with
    ways to deal with differences.  I do not see that it is
    NECESSARY, though, to begin with an "innate" sense of
    time-difference.


    I'm inclined to believe that space is, in some sense, primordial
    to time [in cognitive development].  In fact, I don't need three-
    or two-dimensional space.  I think that ONE-dimensional space is
    sufficient to start things rolling.  Sometime I like to think of
    it as STRINGLAND.  [Remember "Flatland"?]

You DO need two or three dimensions if you really want to study space.
The notion of a partial order is defined only for one dimension; for
higher dimensions a direction has to be introduced.  The existence of
partial orders simplifies qualitative reasoning (see my papers on QP
theory, particularly the one in the last AAAI proceedings); finding
ways to re-introduce enough descriptions to use them is one of the key
problems that you will elide by starting with "one-dimensional spatial
representations".


∂30-Jan-83  1454	GAVAN @ MIT-MC 	scientific respectibility    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:54:38 PST
Date: Sunday, 30 January 1983  17:41-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Minsky @ MIT-OZ, phil-sci @ MIT-OZ
Subject: scientific respectibility
In-reply-to: The message of 30 Jan 1983  16:46-EST from DAM

    From: DAM

    	From: MINSKY

    	In my extremest moments I attribute (the notion of linguistic
    	universals) to (Chomsky's) need to feel that (linguistics) is
    	as self-contained and hence as scientifically respectable
    	as mathematics.

    	I think you are absolutely correct about this (though I would
    say physics rather than mathematics since linguistics is empirical rather
    than tautological).  I think this "need" of Chomsky's is completely
    laudable.

What is it that makes one discipline more "scientifically respectable"
than any other?  How do you define "scientific respectability" and how
is it assessed?

∂30-Jan-83  1459	GAVAN @ MIT-MC 	meta-epistemology, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  14:59:01 PST
Date: Sunday, 30 January 1983  17:37-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   JCMa @ MIT-OZ, minsky @ MIT-OZ, phil-sci @ MIT-MC
Subject: meta-epistemology, etc.
In-reply-to: The message of 30 Jan 1983  16:34-EST from ISAACSON at USC-ISI

Yes, I remember Flatland.

    From: ISAACSON

    It turns out that, if all you can do is detect differences over
    strings, then, if this is indeed ALL you can do, you'd keep doing
    it indefinitely.  The mere repetition then gives you some
    "rhythm", or a primordial time-dimension.  But, more important, I
    think, the successive strings generated from successive
    difference detection will enter a closed cycle.  {This can be
    shown rigorously}.  Once you have a whole bunch of cycles going,
    presumably implemented in your basic biological machinery, you
    have the rudiments of something similar to your physiological
    "Biological Clock".

    I think that from there on your concept of time can be reasonably
    derived from that ongoing "clockwork".

Yes.  That's what we had in mind.

I would consider this physiological clock to be an innate concept of
time, although certainly not a concept as highly differentiated as
yours, mine, or Einstein's.  Thinking about this issue, I've decided
to be open to the idea that difference detection across space may not
be innate, but rather derived metaphorically (epistemogenically, UGH)
from difference detection over time.  I think it unlikely that a fetus
would be doing any difference detection across space, but much more
likely that it would detect differences over time.  Of course, then
this metaphoric process or ability would have to be "innate."

∂30-Jan-83  1537	DAM @ MIT-MC 	a fixed mind    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  15:29:20 PST
Date: Sunday, 30 January 1983  17:29-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: a fixed mind


	Date: Saturday, 29 January 1983  23:49-EST
	From: MINSKY

	All I can make of this is that there was some idea, which seems
	unrealistic to me, of "necessarily true" that he was concerned with.
	Do you think it was a sound idea, given that children use logic
	scarcely at all and adults make mistakes in it?

	Perhaps one should read "necessarily true" as "universally held
to be true over all cognitive beings on all planets".  I think this
is probably a sound idea for some interpretation of "statement"
and "held to be true".

	Finally, doesn't it all contain a mystical vision that one is born
	with a mind which persists through life unchanged, except for its
	contents?

	Certainly there is some level of brain architecture which remains
fixed through life.  Why do you find this a silly idea?
I suspect that there are sophisticated fixed mechanisms.  While there
are certainly developmental and evolutionary aspects to brain structure,
why must ALL structure change in an evolutionary way during development?

	David Mc

∂30-Jan-83  1537	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  15:29:11 PST
Date: 30 Jan 1983 1334-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meta-epistemology, etc.
From: ISAACSON at USC-ISI
To: JCMa at MIT-MC
Cc: minsky at MIT-MC, gavan at MIT-MC, phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]30-Jan-83 13:34:53.ISAACSON>


In-Reply-To: JCMa's message of Sunday, 30 Jan 1983, 14:37-EST and
the relevant messages from MINSKY and GAVAN


JCMa: What is the field over which the "innate" difference
detector notes differences?  What's your method for generating
the idea for time-sequence from those difference detectors?


MINSKY: I do believe that such machine would need to begin with
ways to deal with differences.  I do not see that it is
NECESSARY, though, to begin with an "innate" sense of
time-difference.


I'm inclined to believe that space is, in some sense, primordial
to time [in cognitive development].  In fact, I don't need three-
or two-dimensional space.  I think that ONE-dimensional space is
sufficient to start things rolling.  Sometime I like to think of
it as STRINGLAND.  [Remember "Flatland"?]


It turns out that, if all you can do is detect differences over
strings, then, if this is indeed ALL you can do, you'd keep doing
it indefinitely.  The mere repetition then gives you some
"rhythm", or a primordial time-dimension.  But, more important, I
think, the successive strings generated from successive
difference detection will enter a closed cycle.  {This can be
shown rigorously}.  Once you have a whole bunch of cycles going,
presumably implemented in your basic biological machinery, you
have the rudiments of something similar to your physiological
"Biological Clock".

I think that from there on your concept of time can be reasonably
derived from that ongoing "clockwork".
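
The claim that iterated difference detection must enter a closed cycle is, at
least for a detector that turns a string of a given length into another string
of the same length, just the pigeonhole principle: there are only finitely
many such strings, so the orbit must eventually repeat.  A toy illustration of
exactly this point, in Python (not the actual machines mentioned above):

    # One "difference detection" pass over a circular bit string: output 1
    # where adjacent cells differ, 0 where they agree.
    def differences(bits):
        n = len(bits)
        return tuple(bits[i] ^ bits[(i + 1) % n] for i in range(n))

    # Iterate the detector until a state repeats; return (steps before the
    # cycle is entered, length of the cycle).
    def find_cycle(start):
        seen, state, step = {}, tuple(start), 0
        while state not in seen:
            seen[state] = step
            state = differences(state)
            step += 1
        return seen[state], step - seen[state]

    print(find_cycle((1, 0, 1, 1, 0, 0)))   # -> (steps before cycle, period)

Since there are only 2**n strings of length n, find_cycle always terminates;
the repeating part of the orbit is the "rhythm" or primordial clock referred
to above.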


∂30-Jan-83  1555	ISAACSON at USC-ISI 	Re:  meta-epistemology  
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  15:55:36 PST
Date: 30 Jan 1983 1547-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meta-epistemology
From: ISAACSON at USC-ISI
To: KDF at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]30-Jan-83 15:47:21.ISAACSON>


In-Reply-To: Your message of Sunday, 30 Jan 1983, 17:30-EST


KDF: You DO need two or three dimensions if you really want to
study space.


I have got actual implementations of both one-dimensional and
two-dimensional such difference-detecting machines.  The
two-dimensional IS the richer and more interesting one.  But, so
it happens, it is essentially ASSEMBLED from one-dimensional such
machine-components.

Naturally, two and three dimensional studies of space are crucial
[and are in the cards...]

p.s.  perhaps you can mail me reprints of your papers that you
cited.  Thanks.


p.p.s.  I intend to respond to others that addressed themselves to
this point but I want to mull it over some more.  I also want to
watch the ball game on TV.


∂30-Jan-83  1632	John C. Mallery <JCMa at MIT-OZ at MIT-MC> 	Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  16:32:42 PST
Date: Sunday, 30 January 1983, 19:24-EST
From: John C. Mallery <JCMa at MIT-OZ at MIT-MC>
Subject: Tarskian Semantics
To: DAM at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC
In-reply-to: The message of 30 Jan 83 14:13-EST from DAM at MIT-MC

    From: DAM @ MIT-MC
    Subject: Tarskian Semantics

	    I think this notion is actually quite simple if you are
    willing to ignore some technical details.  Consider the first order
    predicate calculus (the details of the language are irrelevant here; I
    simply want to talk about some concrete formal language).  A model for
    the first order predicate calculus is a first order structure (it is
    fairly important to know what a first order structure is).  For
    example the natural numbers are a first order structure.  In general a
    first order structure is a domain (a universe of discourse such as the
    numbers) and a designated set of functions and predicates over that
    universe of discourse.
	    A given sentence of first order predicate calculus is either
    true or false OF A PARTICULAR FIRST ORDER STRUCTURE.  For example
    there is a formal sentence which "says" "for all x there is a y such
    that y is greater than x".  This sentence will be true in some
    structures and false in others.  It will be true in several different
    structures (this particular sentence is true of both the natural
    numbers and the integers).

What about the case in which the predicate, while applying to some first
order structure, is undefined over another first order structure?  Does
the predicate know this?  Does the structure know this?  If this class
of issues is not handled, I can see no hope for the kinds of operations
necessary for effective manipulation of contexts, precisely the kinds of
things Marvin wishes to do in his learning meaning paper.  Moreover, if
there is no way to perform restricted computations (e.g., Ken Haase's
Cauldrons), the logic is plagued by problems with combinatorial
explosion.  Well?

p.s. I guess this is really a question about how far you can get with
modal logic.  Maybe it is a question about 3-valued, or n-valued logics.

∂30-Jan-83  1638	John C. Mallery <JCMa at MIT-OZ at MIT-MC> 	Tarskian Semantics   
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  16:37:44 PST
Date: Sunday, 30 January 1983, 19:32-EST
From: John C. Mallery <JCMa at MIT-OZ at MIT-MC>
Subject: Tarskian Semantics
To: DAM at MIT-MC
Cc: phil-sci at MIT-OZ at MIT-MC
In-reply-to: The message of 30 Jan 83 15:07-EST from DAM at MIT-MC

    Mail-From: DAM created at 30-Jan-83 15:07:48
    From: DAM @ MIT-MC
    Subject: Tarskian Semantics


    What is important is that that there is SOME precisely defined
    function which takes a sentence and a "world" and gives "true" or
    "false".

What does it mean for something to be "true" or "false?"  Why should we
care? Why should it be important?

What if the function returns neither of these?  Suppose it returns
"unknown", or "unlikely-to-be-true", or "likely-to-be-true",or "maybe",
or "maybe -- run the foo process to find out." What does Tarskian
Semantics do for you then?  
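
One standard answer to the three-valued version of that question (sketched
here only as an illustration, not as anyone's actual proposal) is the strong
Kleene connectives, where "unknown" propagates unless the classical value is
already forced:

    U = 'unknown'

    def k_not(p):
        return U if p is U else (not p)

    def k_and(p, q):
        if p is False or q is False:   # a false conjunct settles it
            return False
        return U if (p is U or q is U) else True

    def k_or(p, q):
        if p is True or q is True:     # a true disjunct settles it
            return True
        return U if (p is U or q is U) else False

    print(k_and(True, U))   # -> 'unknown'
    print(k_or(True, U))    # -> True

A Tarski-style truth recursion can be run over these three values just as
well as over two.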

∂30-Jan-83  1654	GAVAN @ MIT-MC 	innateness, sentences, etc.  
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  16:54:38 PST
Date: Sunday, 30 January 1983  19:35-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.  
In-reply-to: The message of 29 Jan 83  1627 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

    . . .

A better criticism of the Creole comment would have been to point out
that Mandarin Chinese has no verb tenses at all.  My point was only
that, among its other attributes, language can be used as an effective
mechanism for social control.  Remember Newspeak?

    	Apart from that it would be a mistake for GAVAN to suppose that we
    advocates of the correspondence theories consider ourselves defeated.  We
    merely have trotted out all the arguments we care to and don't want to
    repeat ourselves.  Therefore, it is pointless for him to repeat flat
    assertions he has made already about the meaninglessness of
    correspondence statements.

I don't pretend that I am endowed with the power to disabuse anyone of
his/her religious convictions.  I have only pointed out that the
correspondence theory of truth has the character of religious dogma.

I have argued, and I think there is general agreement on this point
(although you may wish to disagree), that anyone's view of reality is
necessarily subjective.  If you assert that there's some
correspondence between some theory (or some statement within a theory)
and something in the world, you're not asserting any kind of
correspondence between two separable things at all.  You're positing a
correspondence between something that's in your head (your subjective
understanding of the theory) and something else that's in your head
(your subjective understanding of reality).  In point of fact, when
you utter a statement about some theory you believe, you're uttering a
natural language summary of some aspect of your subjective
understanding of reality.  So, in a sense, you could say that the
statement either corresponds to or doesn't correspond to that
subjective picture of reality in your head.  But that doesn't
constitute a theory of truth.  The sentence could indeed correspond to
the subjective image of reality while, at the same time, the
subjective image is false.  We think that our subjective image of some
aspect of reality is true if and only if it coheres with our other
images of reality -- if and only if it coheres with our other beliefs.
That is the coherence theory of truth.

The "proof" of the coherence theory lies in the fact that we do not
have independent access to the real world.  We only know what is
mediated by our beliefs.

We test the coherence of our ideas not only by examining their
interconnections with our other beliefs and by conducting empirical
experiments, but also by entering into discourse in a linguistic
community, where our uttered summaries (theories) of subjective
reality receive (or are denied) consensual validation.  This is the
consensus theory of truth.

It may be "true", as you say, that you "true" believers on the
correspondence theory have trotted out all the arguments you care to
and don't want to repeat yourselves, but you have confined yourself to
defending the correspondence theory.  I have yet to read on this list
a critique of either the coherence theory or the consensus theory.
Why is this?

I have also said that I might be prepared to accept the correspondence
theory if someone could explicitly and unambiguously demarcate the
border between opinion and fact.  No one has bothered to try.  Why is
this?

∂30-Jan-83  2112	MINSKY @ MIT-MC
Received: from MIT-ML by SU-AI with NCP/FTP; 30 Jan 83  21:12:37 PST
Date: Sunday, 30 January 1983  23:59-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   dam @ MIT-OZ, phil-sci @ MIT-OZ
In-reply-to: The message of 30 Jan 83  1233 PST from John McCarthy <JMC at SU-AI>


JMC:   Reading past philosophers, and probably even present day artificial
     intelligence researchers, will not be necessary in order to
     understand how the programs work, although it will be needed to
     understand the history of the subject.  There is an enormous
     amount of somewhat relevant past philosophy, but it is probably a
     better strategy to concentrate on recent work and above all, to
     think directly about the problems rather than about winning
     debates on behalf of one's already held beliefs.


This is what I was trying to say.  I assumed that the readers of
PHIL-SCI had the principal motive of understanding how minds work,
rather than of understanding the evolutionary details of the
ideas that got us into our present state.

∂30-Jan-83  2130	MINSKY @ MIT-MC 	innateness, sentences, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  21:30:22 PST
Date: Monday, 31 January 1983  00:14-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   JCM @ SU-AI, phil-sci @ MIT-OZ
Subject: innateness, sentences, etc.
In-reply-to: The message of 30 Jan 1983  16:41-EST from DAM


DAM:	Does the sparseness theory explain Chomsky's X-bar linguistic
        universals and the tautological mathematical truths?  I don't
        know but I suspect not.


I don't see why not.  I bet if you spent some time on the tautological
truths you might be able to show something like this: we search through
theories and reject ones that are locally inconsistent (because they
don't output suitable predictions).  Presumably we end up being locally
very consistent because we use related procedures to generate, say,
arithmetical examples and to generate arithmetical propositions.

As for Chomsky's X-bar universals, if you tell me what they are, I
will volunteer to propose a computational complexity reason - that is,
a cognitive convenience reason - why people will do things that way.
I have never found any such difficulty for the other alleged
universals of this sort.

On second thought, DAM, I think it would be a good exercise for you to
do that: see if you find it difficult to find a "non-linguistic"
reason.  Most alleged "universals" seem to be things like "no culture
possesses a preposition that can extract the CADR of the previous
phrase".  Well, I have the impression that maybe ALL the universals
are consequences of not being able to keep track of more than a few
fragments of such things.


(That was what I found mildly enraging about your assuming that
Chomsky's singing Berwick's thesis erases Berwick's debt to AI ideas -
since he was my student first and understood this idea very well.
An early form of the idea stems back to the "depth hypothesis" of
Yngve, a pioneer computational linguist who was at MIT here in the
late '50's and early '60's.  The Marcus Grammar was the first serious
approach to build a depth-limited grammar, and Berwick first applied
simple learning ideas to such things.)

∂30-Jan-83  2156	GAVAN @ MIT-MC 	counter-productive tactics   
Received: from MIT-ML by SU-AI with NCP/FTP; 30 Jan 83  21:56:15 PST
Date: Monday, 31 January 1983  00:46-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   dam @ MIT-OZ, John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: counter-productive tactics
In-reply-to: The message of 30 Jan 1983  23:59-EST from MINSKY

    From: MINSKY

    JMC:   Reading past philosophers, and probably even present day artificial
         intelligence researchers, will not be necessary in order to
         understand how the programs work, although it will be needed to
         understand the history of the subject.  There is an enormous
         amount of somewhat relevant past philosophy, but it is probably a
         better strategy to concentrate on recent work and above all, to
         think directly about the problems rather than about winning
         debates on behalf of one's already held beliefs.


    This is what I was trying to say.  I assumed that the readers of
    PHIL-SCI had the principal motive of understanding how minds work,
    rather than of understanding the evolutionary details of the
    ideas that got us into our present state.

You're right, Marvin.  And that's the point of the debates.  It's
important to see how bad ideas and religious dogma (like the
correspondence theory of truth and the metaphysical dualism that comes
with it) can get people wedged.  But it's also true that too much
emphasis on winning debates can get in the way.  But then, so can ad
hominem arguments from people who are said to be important figures in
the field -- arguments like "muddled" and "scientifically
unpromising."  All kinds of tactics are counter-productive.

∂30-Jan-83  2205	GAVAN @ MIT-MC 	some mathematical results    
Received: from MIT-ML by SU-AI with NCP/FTP; 30 Jan 83  22:05:09 PST
Date: Monday, 31 January 1983  00:57-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: some mathematical results
In-reply-to: The message of 30 Jan 1983  17:14-EST from DAM

    From: DAM

    	Well it sure would be nice if everyone really understood
    some mathematical logic and I also encourage people to do some hard
    studying.  

Most people on this list DO understand SOME mathematical logic.  It would
NOT be nice if EVERYONE did because then the world would be a boring place
in which to live.

∂30-Jan-83  2206	John C. Mallery <JCMa at MIT-OZ> 	Hallelujah: Saved from Chomskian Depravity    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  22:06:09 PST
Date: Monday, 31 January 1983, 00:20-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: Hallelujah: Saved from Chomskian Depravity 
To: MINSKY at MIT-MC
Cc: phil-sci at MIT-MC
In-reply-to: The message of 30 Jan 83 23:37-EST from MINSKY at MIT-MC

    Mail-From: MINSKY created at 30-Jan-83 23:37:47
    Subject: Kant: no dummy
    In-reply-to: The message of 30 Jan 1983 11:00-EST from John Batali <Batali>

    I have saved a few brilliant young people from Chomskyism in my day
    - and, I boldly assert, with a consequent large effect on recent
    history of linguistics.  

How about some enumeration?

    I would like to influence some of you to see how shabby is our
    heritage in the domain of ideas about learning and meaning, compared
    to the power that we could get from new philosophical analyses using
    the great revelations that have recently dawned from computation.

Care to mention any specific thinkers on your "hit list?"  

Which great new revelations should we use?

p.s. There is an extra charge to shoot down people who are correct!


∂30-Jan-83  2249	John McCarthy <JMC@SU-AI> 	innateness        
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  22:49:00 PST
Date: 30 Jan 83  1657 PST
From: John McCarthy <JMC@SU-AI>
Subject: innateness    
To:   dam@MIT-OZ, minsky@MIT-OZ
CC:   phil-sci@MIT-OZ    

I don't see a need to make each science self-contained as laudable.  I
just was chairman of an economics PhD oral in which the candidate and
the professors hypothesized about what information corporations
find it optimal to give customers, ignoring the fact that there is a
huge (business school) literature on marketing covering precisely this
point.  Economists often mistakenly treat technology as a capital good 
which a firm buys a certain quantity of - ignoring any specific characteristics
of specific inventions and processes.

	In the present case, it isn't laudable for linguists to ignore
the relation between intelligence and problem solving on the one hand
and language on the other.


∂31-Jan-83  0019	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  00:09:02 PST
Date: 30 Jan 1983 2217-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meta-epistemology, etc.
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]30-Jan-83 22:17:16.ISAACSON>

In-Reply-To: Your message of Sunday, 30 Jan 1983, 17:37-EST


GAVAN: Yes.  That's what we had in mind.


Good!

(For a moment there I thought you flatly said: "That's what we
ha[ve] in [the] mind.")


GAVAN: I think it unlikely that a fetus would be doing any
difference detection across space, but much more likely that it
would detect differences over time.


My introspection relating to events from my pre-natal days is
rather faint...  {I was present during the births of our two
daughters, but, alas, forgot to ask.  Now they tell me they don't
remember!}

I would think, though, that fetuses may detect tactile signals
over their entire bodies (i.e., fluid pressure, umbilical cord,
etc.), some temperature fluctuations or differential, noises, and
other things of these sorts.  One could argue, of course, that
noises happen in "time" and so on, but I am inclined to view
these (at least tentatively) as being more in the nature of
"spatial" inputs rather than "temporal".


∂31-Jan-83  0019	GAVAN @ MIT-MC 	Kant: no dummy
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  23:49:58 PST
Date: Monday, 31 January 1983  00:37-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   John Batali <Batali @ MIT-OZ>, JCMa @ MIT-OZ, phil-sci @ MIT-MC
Subject: Kant: no dummy
In-reply-to: The message of 30 Jan 1983  23:37-EST from MINSKY

    From: MINSKY

    BATALI: Yes.  The mind constructs itself over time.  What does it need
         to know and know how to do in order to do this?  At any point in
         time  how can it use what it knows to continue the process?

    Yes.  And the point was that, from all I've heard and some I've read,
    I conclude that Kant did not have what I would consider good ideas by
    modern standards - e.g., since Freud, Piaget, and .

But both Freud and Piaget had the ideas they did have because, at
least partially, they read and understood the issues.  This they did
by reading and understanding people like Spinoza, Leibniz, Kant, and
especially Hegel and Aristotle.

    BATALI: It may be that the only thing that separated Plato, or Hume,
         or Leibniz, or Kant, or Hegel, or Husserl from "success" was the lack
         of an understanding of computation.  They certainly understood a lot
         else.

    Well, in your reply I think you illustrated what I was asserting, by
    being forced to stray toward approving of what such people said in
    their time.  

It remains to be seen whether the understanding of computation will
bring about advances in the understanding of understanding.  I
personally believe it will, but it still remains to be seen.  Leibniz
knew more about computation than you might imagine.  His attempt to
build a computer was the first.  Hegel understood, more even than most
academics today, the limits of formal (or "groundless") logic.

    But I'm not saying that they weren't very smart in some
    sense - only that, like Chomsky, they found and led others along paths
    that were not so productive.  

People like Freud, Piaget, Peirce, McCulloch, and Minsky.

    I don't even agree that they did the
    best they could for their time.  This is a counterfactual that we can
    never be sure about but, for example, I don't see why we had to wait
    for Piaget and his non-conservation experiments to show how far from
    innate, a priori - or whatever you want to call it - is the basis of
    number in human thought.

Did Kant say that number was pure and a priori?  I think not.  He said
space and time were.  The concept of number requires the concept of
space (maybe time) since you have to recognize something in order to
count it.  No, I would consider Kant to be the source for the a
posteriori nature of number.  He said that only space and time were
pure and a priori.

    . . .

    And one can admire students who delve deeply into the past, yet still
    regret that they searched so hard for clues in the old manuscripts
    that they could not see the riches in the fresh air of the present.

Clearly, the best strategy is to consult both the old and the new.
You seem to think that most people who read the old philosophers get
wedged in the musty texts.  I doubt it.  Personally, I take all the
philosophers I read with more than just a grain of salt, including
Marvin Minsky.

∂31-Jan-83  0019	MINSKY @ MIT-MC 	Kant: no dummy    
Received: from MIT-MC by SU-AI with NCP/FTP; 30 Jan 83  23:42:27 PST
Date: Sunday, 30 January 1983  23:37-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John Batali <Batali @ MIT-OZ>
Cc:   GAVAN @ MIT-OZ, JCMa @ MIT-OZ, phil-sci @ MIT-MC
Subject: Kant: no dummy
In-reply-to: The message of 30 Jan 1983 11:00-EST from John Batali <Batali>


BATALI: Yes.  The mind constructs itself over time.  What does it need
     to know and know how to do in order to do this?  At any point in
     time  how can it use what it knows to continue the process?

Yes.  And the point was that, from all I've heard and some I've read,
I conclude that Kant did not have what I would consider good ideas by
modern standards - e.g., since Freud, Piaget, and .

BATALI: It may be that the only thing that separated Plato, or Hume,
     or Leibniz, or Kant, or Hegel, or Husserl from "success" was the lack
     of an understanding of computation.  They certainly understood a lot
     else.

Well, in your reply I think you illustrated what I was asserting, by
being forced to stray toward approving of what such people said in
their time.  But I'm not saying that they weren't very smart in some
sense - only that, like Chomsky, they found and led others along paths
that were not so productive.  I don't even agree that they did the
best they could for their time.  This is a counterfactual that we can
never be sure about but, for example, I don't see why we had to wait
for Piaget and his non-conservation experiments to show how far from
innate, a priori - or whatever you want to call it - is the basis of
number in human thought.


I want to add that I do not want to discourage students from studying
their intellectual ancestors.  But, as GAVAN hinted, it is probably
important psychologically to have enough courage to formulate new
directions that clash with those even of hundreds of years.  I have
saved a few brilliant young people from Chomskyism in my day - and, I
boldly assert, with a consequent large effect on recent history of
linguistics.  I would like to influence some of you to see how shabby
is our heritage in the domain of ideas about learning and meaning,
compared to the power that we could get from new philosophical
analyses using the great revelations that have recently dawned from
computation.  

There is room still to admire Galileo, if we see that it was
the new methods of Newton and Leibniz that founded modern mechanics.  It was too
bad that Galileo did not see the need for TWO invariants, Momentum and
Energy, instead of one.  There will always be room to admire Kant, but
still to regret that he could not see how ideas could emerge from
mechanism without having been there in the first place.

And one can admire students who delve deeply into the past, yet still
regret that they searched so hard for clues in the old manuscripts
that they could not see the riches in the fresh air of the present.

∂31-Jan-83  0019	ISAACSON at USC-ISI 	Re:  meta epitemology, etc.  
Received: from MIT-ML by SU-AI with NCP/FTP; 30 Jan 83  23:59:05 PST
Date: 30 Jan 1983 2249-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meta epitemology, etc.
From: ISAACSON at USC-ISI
To: MINSKY at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]30-Jan-83 22:49:08.ISAACSON>

In-Reply-To: Your message of Sunday, 30 Jan 1983, 17:21-EST


MINSKY: My intuition agrees with JDI's.  If you have a machine
capable of computation and memory of some sort, and ways to
discern differences of simple kinds - and probably, also, ways to
"chain" or otherwise build structures, this should have a lot of
potential.

Fine.  Some of you may know that I've got a specimen of this
sort.  At such a primitive level it does, on its own accord,
plenty more.  I'm itching to discuss its properties, if some of
you are good and ready.


MINSKY: I don't see that any other special provisions for concepts of
time and space need be supplied.

I'm inclined to agree here.


As to your parenthetical comment, I generally agree that beyond
the *minimal* specification above, we have to start worrying
about higher-level processing of the kinds you mention, and then
some.


∂31-Jan-83  0100	John C. Mallery <JCMa at MIT-OZ> 	innateness, sentences, etc.    
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  01:00:35 PST
Date: Sunday, 30 January 1983, 19:47-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: innateness, sentences, etc.
To: MINSKY at mc
Cc: phil-sci at mc
In-reply-to: The message of 30 Jan 83 17:10-EST from MINSKY at MIT-MC

    Mail-From: MINSKY created at 30-Jan-83 17:10:57
    From: MINSKY @ MIT-MC
    Subject: innateness, sentences, etc.
    In-reply-to: The message of 30 Jan 1983  16:46-EST from DAM


    The desire to do something that respectable is laudable.  The desire
    to do it by declaring that linguistics is ("defined") to be
    completable within the domain of sentences was in my view a bad
    judgment, but was worth trying.

I disagree.  It was lossage from the start, and probably resulted from
the impoverished ontology characteristic of vulgar materialism.

    The substance of my grumble, then, is the dogged and unreasonable
    persistence in that view after the middle 1960's which, in my view,
    retarded the development of a generation of linguistic students. 

Amen. In fact, it seems that European linguistics fared much better
than otherwise it might have, precisely because it was not so dominated
by the Chomskian paradigm [It had de Saussure].  [Note that it has
fields like text-linguistics, and worries about pragmatics a la Peirce].
Of course, this is just another example of the dichotomy between
Analytic Philosophy and Continental Philosophy.  The curious thing is
that vulgar materialism could play such a large role in Analytic
Philosophy of Mind! [Maybe it's the behavioralist propensities of both?]

∂31-Jan-83  0101	John McCarthy <JMC@SU-AI> 	There you go again, Gavan.       
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  01:00:52 PST
Date: 30 Jan 83  2216 PST
From: John McCarthy <JMC@SU-AI>
Subject: There you go again, Gavan.   
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

A string search indicates that your latest message is the third in which
you have referred to the correspondence theory as a religious
dogma.  Perhaps two more tries and we'll all admit it.  I'll
have more to say in favor of the correspondence theory when I
have something new to say about it.


∂31-Jan-83  0101	MINSKY @ MIT-MC 	innateness, sentences, etc.      
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  01:00:43 PST
Date: Sunday, 30 January 1983  23:53-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   John C. Mallery <JCMa @ MIT-OZ>
Cc:   phil-sci @ mc
Subject: innateness, sentences, etc.  
In-reply-to: The message of 30 Jan 1983 15:27-EST from John C. Mallery <JCMa>


JCMA: Aren't Fodor's arguments about mentalese just an effort to prop
     up the Chomskian position [Or gain credibility through linkage to
     an established tradition]?  If this is the case, it sure seems
     that the analytical position on philosophy of mind which derives from
     Fodor is hopelessly lost.  Comments?

Well, that's what I am inclined to believe all right, but haven't had
the courage to say.  Principally because (i) I have been unable to
understand his arguments as explained to me by students or (ii) in
discussing things with him or even (iii) when Dennett tried to explain
them to me.  I have assumed that either his arguments are based on
many assumptions that I can't understand (to say nothing of accept) or
that - since people say he is smart - our ways of thinking are
too different to communicate.   

Now when a person can't understand another person, there are the
options of working hard to understand, or deciding there are better
things to do.  In Chomsky's case, it seemed pretty clear to me that
the Chomsky followers were not discovering as much about language in
general as were either the AI people, e.g., Schank and conceptual
dependency lines, the Semantic Network people, or the Text-Linguistic
movements.  So in that case I decided the approach was unpromising.
They did discover a fair amount about Grammar, but the "universals"
that are exhibited were - in simple cases - explained in simple
cognitive ways, in intermediate cases, by simple limitations of the
parsing computer's abilities.  Finally, in complicated cases of
"universals" the examples seem so peculiar and shaky that they prove
the rule by exception - that is they have gone past the bottom of that
barrel and there seems to be almost none of "universal grammar" after
all.  Still, there was a substantial intellectual and mathematical
content worth serious study - especially in the early
"transformational" theory, which unfortunately did not survive careful
criticism.

In the case of Fodorism, I don't know of any important problems that his
methods are alleged to have solved.  So I have not seen any incentive
to give the matter serious attention.  Also, I am appalled at the
vagueness of reports from students who take courses in this and say
how "interesting" it was - yet seem unable to tell me some "good idea"
that they got from it.  Of course, one can say that the two worlds are
so different that they can't find a way to explain.

∂31-Jan-83  0301	GAVAN @ MIT-MC 	There you don't go again, JMC.    
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  03:01:16 PST
Date: Monday, 31 January 1983  05:58-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: There you don't go again, JMC.   
In-reply-to: The message of 30 Jan 83  2216 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

    A string search indicates that your latest message is the third in which
    you have referred to the correspondence theory as a religious
    dogma.  Perhaps two more tries and we'll all admit it.  I'll
    have more to say in favor of the correspondence theory when I
    have something new to say about it.

I don't expect you to defend the correspondence theory.  Instead of
defending IT why not critique the alternative, the coherence theory?
Why not critique the consensus theory?  This is the one you said was
"muddled".  Why do you think so?

You have already defended the correspondence theory "to the max", as
they say in California.  Yet your denials of both the consensus and
coherence theories have not been reasoned critiques of them, but
rather ad hominem attacks against the person presenting them.  If you
don't like the message, criticize IT -- not the messenger.

∂31-Jan-83  0309	GAVAN @ MIT-MC 	meta-epistemology, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  03:09:19 PST
Date: Monday, 31 January 1983  06:00-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   phil-sci @ MIT-MC
Subject: meta-epistemology, etc.
In-reply-to: The message of 31 Jan 1983  01:17-EST from ISAACSON at USC-ISI


    From: ISAACSON at USC-ISI

    I would think, though, that fetuses may detect tactile signals
    over their entire bodies (i.e., fluid pressure, umbilical cord,
    etc.), some temperature fluctuations or differential, noises, and
    other things of these sorts.  One could argue, of course, that
    noises happen in "time" and so on, but I am inclined to view
    these (at least tentatively) as being more in the nature of
    "spatial" inputs rather than "temporal".

Why?

∂31-Jan-83  0454	ISAACSON at USC-ISI 	Pre-natal meta-epistemology  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  04:54:08 PST
Date: 31 Jan 1983 0449-PST
Sender: ISAACSON at USC-ISI
Subject: Pre-natal meta-epistemology
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]31-Jan-83 04:49:10.ISAACSON>

In-Reply-To: Your message of Monday, 31 Jan 1983, 06:00-EST


GAVAN: Why?



Because I think that noise, activating the hearing system, through
the use of both ears, may convey the first stereophonic sensation
of SPACE.

BTW, I thought of another possible, and very interesting, I
think, source of rhythm and time-related accommodation of the
mind (via the ears): The Mother's heartbeat.


p.s.  I concede to having an intuitive predilection that way,
which may be a weakness, strictly speaking.  [see also my other
message to phaneron on such matters.]


∂31-Jan-83  0819	BATALI @ MIT-MC 	There you don't go again, JMC.   
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  08:19:23 PST
Date: Monday, 31 January 1983  11:09-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: There you don't go again, JMC.   
In-reply-to: The message of 31 Jan 1983  05:58-EST from GAVAN

    From: GAVAN

    You have already defended the correspondence theory "to the max", as
    they say in California.  Yet your denials of both the consensus and
    coherence theories have not been reasoned critiques of them, but
    rather ad hominem attacks against the person presenting them.  If you
    don't like the message, criticize IT -- not the messenger.

As one on the correspondence side, let me say that I am not against
the coherence of the coherence view and I consent to consensus.  I
won't criticise these views because they are right.  I won't. I won't. I
won't. And you can't make me.

Rather than truth, how about them hogs, eh?

∂31-Jan-83  0934	BATALI @ MIT-MC 	Something Changes 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  09:34:36 PST
Date: Monday, 31 January 1983  12:11-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   phil-sci @ MIT-OZ
Subject: Something Changes

Perhaps the problem with viewing sentences as innate is that
"sentence" is a grammatical or syntactic construct, and as such, is
relatively arbitrary.  Thus the claim that sentences are innate is
either vacuous -- because all it says is that we use something like
WFFs -- or it is overconstraining if we take any particular form of
sentence as definitional.

Suppose we take as innate, not a syntactic characterization of
communication, such as sentence, but a SEMANTIC characterization.  That
is: we take as primitive WHAT is said, rather than how it is said.

To do this, we need a theory of communication that tells us what
communication is for.  Coming from the recent discussion of the
innateness of the idea of time, let us imagine that communication is
to inform the mind of change.  So the schematic of communication is
expressed by the sentence "something changes".  Notice that this is a
semantic notion; syntactic details are, as yet, irrelevant.  What has
to be done to make a complete communication out of the schematic
communication is the filling in of what is changing and what sort of
change it is.  A mind would know that a complete communication has
occurred when it can fill out the schematic communication with those
particulars.  It just so happens that in modern languages, the
syntactically characterized "sentence" is the smallest grammatical
unit that can thus fill out the schematic communication.  Thus the
claim that a "sentence expresses a complete thought."  But complete
thoughts can be expressed by incomplete sentences, by grunts and
moans. "Something changes" is just one of what may be a fair-sized set
of semantically characterized schematic communications.

On this view, what must be innate are the IDEAS of change, and
communication, and time and process and action.  Roughly, the set of
concepts that a program would have to have to understand other
programs, and itself.  Claim: we should view minds not just AS
programs, but as programs that can understand how to use programs.
The above story about communication is just one application of this.
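
As a minimal sketch (mine, not part of the original message) of what a
semantically characterized schematic communication might look like as a
data structure, with hypothetical names chosen only for illustration:

from dataclasses import dataclass
from typing import Optional

@dataclass
class SomethingChanges:
    """Schematic communication: 'something changes'."""
    what: Optional[str] = None             # what is changing
    kind_of_change: Optional[str] = None   # what sort of change it is

    def is_complete(self) -> bool:
        # A mind would take the communication to be complete once it can
        # fill out the schematic with these particulars.
        return self.what is not None and self.kind_of_change is not None

# Even a grunt or an incomplete sentence can fill the slots:
msg = SomethingChanges(what="the door", kind_of_change="opened")
assert msg.is_complete()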

∂31-Jan-83  1118	ISAACSON at USC-ISI 	Re:  Something changes  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  11:18:04 PST
Date: 31 Jan 1983 1108-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Something changes
From: ISAACSON at USC-ISI
To: BATALI at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]31-Jan-83 11:08:08.ISAACSON>


In-Reply-To: Your message of Monday, 31 Jan 1983, 12:11-EST


Getting used to talking about pre-natal stuff, I think that your
message is pregnant with some interesting ideas.  Please allow
them to develop.

In particular, I like your central concept of "Something
Changes".


-- JDI


∂31-Jan-83  1144	DAM @ MIT-MC 	innateness 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  11:44:06 PST
Date: Monday, 31 January 1983  14:37-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   McCarthy @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness


	Date: 30 Jan 83  1657 PST
	From: John McCarthy <JMC at SU-AI>

	I don't see a need to make each science self-contained as laudable.
	Economists often mistakenly treat technology as a capital good which a
	firm buys a certain quantity of - ignoring any specific
	characteristics of specific inventions and processes.

	In the present case, it isn't laudable for linguists to ignore
	the relation between intelligence and problem solving on the one hand
	and language on the other.

	I agree that there are cases where precise and self-contained
theories are not as useful as intuitive judgments about the nature of
things.  However, development of such a theory is never
a bad thing as long as one remembers that sometimes the precise theories
are inaccurate, or ignore interesting phenomena.  Modern linguistics
seems to be very successful and I do not think that anyone studying
language can afford to ignore Chomskian linguistics.  Would you
really prefer that Chomsky's theories had never been proposed?  Would
an economist prefer that there were no quantitative models?

	David Mc
∂31-Jan-83  1333	LEVITT @ MIT-MC 	Languages, tenses 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  13:33:03 PST
Date: Monday, 31 January 1983  16:20-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   John C. Mallery <JCMa @ MIT-OZ>
Cc:   John McCarthy @ su-ai, phil-sci @ mc
Subject: Languages, tenses
In-reply-to: The message of 30 Jan 1983 15:01-EST from John C. Mallery <JCMa>

    From: John C. Mallery <JCMa>

    Creole languages in this case refers to languages which evolved from
    pidgin French in the Caribbean.  The point is that there are no tenses
    for the past.  This is in sharp contrast to continental French!!
    Past-tense information must be conveyed through non-syntactic
    mechanisms.  ...
    One interesting point here is how semantics can make up for
    underdeveloped syntax.  Some linguists view the degree of
    sophistication of a language as the degree to which the language
    "compiles" syntactically decidable information into its syntax, rather
    than forcing the speaker to work harder, decoding it semantically.  Those
    same linguists view French as one of the most sophisticated languages.

This seems to be one of the more plausible manifestations of the
homily "language limits thought".  It seems inevitable that the richer
vocabulary of tenses a language has -- especially subjunctive and
perfect tenses -- the more tractable it will be to describe complex
plans, with concurrencies and contingencies, e.g. build a machine.
Without this fluency it must be very hard to describe such a plan to
someone else if help is needed.  Could hairy syntax have been the key
steppingstone that let Europe build its technology and dominate the
world?

Could hairy tenses also make it easier to implement a personal plan,
to remember and describe such a plan to oneself?

∂31-Jan-83  1525	DAM @ MIT-MC 	innateness, sentences
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  15:24:53 PST
Date: Monday, 31 January 1983  18:17-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Monday, 31 January 1983  00:14-EST
	From: MINSKY


	I was mildly enraged by your assuming that Chomsky's singing
	Berwick's thesis erases Berwick's debt to AI ideas

	I did not mean to imply that Berwick has no debt to AI.  I
merely meant to point out that Berwick ALSO has a GREAT debt to
Chomskian linguistics.

	As for Chomsky's X-bar universals, if you tell me what they
	are, I will volunteer to propose a computational complexity
	reason.

	On second thought, DAM, I think it would be a good exercise
	for you to see if you find it difficult to find a "non-linguistic"
	(explanation for linguistic universals).

	I must admit that I do not know Chomsky's X-bar universals and
in any case I am not the one who should comment on this (Berwick told
me that he is composing a response).  But I want you to know that I
have thought about how I would make a sparseness argument for the
objectivity of mathematical truth.  The first step is to get a really good
understanding of the nature of mathematical truth; mathematical truth
originates from the notion of a tautology, not the notion of a number.
	You were the one who always emphasised understanding the
subject matter to be learned.  There are highly developed, precise, and
fairly accurate theories of both grammar and mathematical truth.  Yet
now you seem to feel that these theories are not very important.  Has
your position on understanding what is to be learned changed?

	I SEE NO WAY TO CONSTRUCT A SPARSENESS ARGUMENT FOR MATHEMATICAL
TRUTH.  That does not say that it could not be done, but it is harder than
you think and I consider the sparseness theory to be a failure here.

	On second thought, Marvin, it would be a good exercise for you
to see if you can construct a sparseness theory of
mathematical truth.  I would like such a theory to contain the
following:

1)  A statement of the space of "theories" or "notions", "languages",
and/or "computational structures".  The choice is up to you.

2)  A criterion for judging which "notions" or "theories" or "languages"
are "better" or more "useful" or more "ok" than others.  Pick your
own criterion.

3)  A concrete argument as to why your interpretations of 1) and 2)
lead to "objective" mathematical truth as mathematicians see it.

	I have worked on theories of mathematical truth for some time.
I think you will find you can not get off the ground without making a lot of
assumptions about innate structure.  Of course there may be some
interpretation of 1) and 2) which justifies your thinking, but
I see no reason to suspect there is.

	David Mc

∂31-Jan-83  1628	MINSKY @ MIT-MC 	innateness, sentences  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  16:28:02 PST
Date: Monday, 31 January 1983  19:20-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 31 Jan 1983  18:17-EST from DAM


I am not sure precisely what problem it is you are speaking of, when
you seek a theory of mathematical truth.  Is it:

     Why are there truths like 2 and 2 is four?
     Why are there more complicated truths with quantifiers, like
          unique factorization of integers into primes?
     Why are we sure that the inferences we make about such things
          will stand indefinitely - e.g., sure we won't find counterexamples?
     Why are we secure about applying such theories to themselves, e.g.
          in proving that small theories are consistent?
     Why are we sure such inferences apply to anything, e.g., 
          to things we actually count?

In other words, I sense a complicated network of questions connected
with "truth".  Can you explain what part of this network you are
working on?   Am I wrong to introject those psychological elements
like "sure" and "secure"?

I don't think we would have much disagreement about much of the nature
of how to make machines (i) discover some mathematical formulations or
(ii) learn about some and "understand" how to use them to solve problems
of various sorts.  I would expect that we might even agree on various
interesting and useful senses in which a big system might "believe" in
some mathematics and even believe that it has some special status.
But I have the sense that you are looking for something different, a
sense in which there are fragments of mathematics - or "mathematical
reasoning" - which are true beyond any such embedding in psychology.  My
appeal to sparseness is simply that I suspect that, almost certainly,
there will be such fragments that all "thinking machines" of any
sufficient quality will almost surely discover - like representing
generalizations with quantifiers and then using detachment to get them
back - and, surely, the rest of first-order logic as well.

Is the question you are asking, "why, then, is (say) FOL so remarkably
useful and versatile for such uses"?

If that is the question, and I think it is a good one, I am inclined
to suppose that part of the answer is that it is just about the
simplest set of representations and rules that do this, and that
(sparseness, again) other, better ones are much further down the line
and we have not found them yet.  

The other part of the answer, of why there should be such useful
systems that are so accessible at all that we humans have found them
with but a few thousand years of search (or for evolution to find them
in a few million years, if one prefers), well, I haven't wondered
about that enough.  Could it be that this is related to the question
you asked?

∂31-Jan-83  1712	DAM @ MIT-MC 	innateness, sentences
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  17:11:46 PST
Date: Monday, 31 January 1983  20:03-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Monday, 31 January 1983  19:20-EST
	From: MINSKY

	Let me try to more carefully define the problem I am really interested
in.  I ASSUME that there are mathematical statements which can be proven
mathematically.  I do not know exactly what these "mathematical
statements" are.  Are they the english sentences used to represent them?
Certainly not.  Whatever these statements are, it seems (empirically)
to be the case that they roughly correspond in some agreed-upon way to the
sentences of first-order set theory.  But the correspondence seems
flawed (to me) because it is intuitively implausible to me that the numbers
are "really" sets (or "really" anything in particular, for that matter).
Thus to summarize my first question is:

	1)  What is a mathematical statement?

	Now given some theory of what a mathematical statement is
I am also interested in the notion of truth.  Thus my second question:

	2)  Which mathematical statements are true?

	I am also interested in the pragmatic or engineering nature
of mathematics.  This leads to the third question:

	3)  Why are there mathematical truths?

	I take the first two questions to be empirical psychological
questions about adult human mathematicians.  The answers to these
questions are in principle independent of developmental issues
(such as innateness).  Though it may be that that metamathematics
is best done by considering development.
	It seems to me that before one can have any theory of the
development of mathematical truth (either by an individual or by
a species) one must have a theory of what mathematical truth is.
The theory that it is the provable theorems of ZF set theory is
a very useful precise theory of metamathematics.  An imprecise
theory of metamathematics is that it is the set of "tautological"
truths; whatever those are, they seem to be objective: most people
agree about definitional tautologies once they understand them.

	One needs more than a theory of mathematics to present a
particular theory of the development of mathematics, one also needs
a theory of development per se.  What is the space of possible final
states (what is the space of possible adults)?  What is the nature
of the system which is developing (should we take it to be a computer
program, a set of axioms, a "computational system", or what)?  What
is its initial state and what are the laws of dynamics which govern
its development?

	I think the really hard question is "what is the nature of the
developing system?"  I see no easy answers here (and I certainly consider
myself to be computationally sophisticated, I have even done lots of
LISP programming).  This question is closely related to the question
of "what is the space in which "sparseness" is evaluted?"  I think
this is an extremely important issue and I would really like a concrete
theory.

	David Mc

∂31-Jan-83  1939	GAVAN @ MIT-MC 	Pre-natal meta-epistemology  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  19:39:02 PST
Date: Monday, 31 January 1983  22:25-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   phil-sci @ MIT-MC
Subject: Pre-natal meta-epistemology
In-reply-to: The message of 31 Jan 1983  07:49-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI

    GAVAN: Why?

    Because I think that noise, activating the hearing system, through
    the use of both ears, may convey the first stereophonic sensation
    of SPACE.

It's unclear to me that stereophonic sound could signify space to a
creature that had not already experienced it or learned about it.
Tactile sensation might have more to do with it, as I believe KDF
hinted earlier.  I believe Kant may have touched on this in his
doctoral dissertation.

    BTW, I thought of another possible, and very interesting, I
    think, source of rhythm and time-related accommodation of the
    mind (via the ears): The Mother's heartbeat.

    p.s.  I concede to having an intuitive predilection that way,
    which may be a weakness, strictly speaking.  [see also my other
    message to phaneron on such matters.]

I share your intuition.

∂31-Jan-83  1952	John C. Mallery <JCMa at MIT-OZ> 	Re:  meta-epistemology, etc.   
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  19:52:17 PST
Date: Monday, 31 January 1983, 22:43-EST
From: John C. Mallery <JCMa at MIT-OZ>
Subject: Re:  meta-epistemology, etc.
To: ISAACSON at USC-ISI
Cc: phil-sci at MIT-MC
In-reply-to: <[USC-ISI]30-Jan-83 22:17:16.ISAACSON>


    From: ISAACSON at USC-ISI
    In-Reply-To: Your message of Sunday, 30 Jan 1983, 17:37-EST

    One could argue, of course, that noises happen in "time" and so on,
    but I am inclined to view these (at least tentatively) as being more
    in the nature of "spatial" inputs rather than "temporal".

If you will admit that a fetus is a process, then doesn't it implicitly encode
time by its very definition as a process?

∂31-Jan-83  2016	GAVAN @ MIT-MC 	Languages, tenses  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  20:16:16 PST
Date: Monday, 31 January 1983  23:09-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   LEVITT @ MIT-OZ
Cc:   John C. Mallery <JCMa @ MIT-OZ>, John McCarthy @ su-ai, phil-sci @ mc
Subject: Languages, tenses
In-reply-to: The message of 31 Jan 1983  16:20-EST from LEVITT

    From: LEVITT

    Could hairy syntax have been the key steppingstone that let Europe
    build its technology and dominate the world?

Well, it may have helped, but if it did then the causality here is
probably circular.  That is, European tool-using and nature domination
would seem to have stimulated or even necessitated the use of complex
verb tenses.  What do the silent linguists on this list think?  

    Could hairy tenses also make it easier to implement a personal plan,
    to remember and describe such a plan to oneself?

I think so, but as JMC pointed out it may not be necessary.

∂31-Jan-83  2041	John McCarthy <JMC@SU-AI> 	narrowness        
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  20:40:47 PST
Date: 31 Jan 83  2030 PST
From: John McCarthy <JMC@SU-AI>
Subject: narrowness    
To:   dam@MIT-OZ, phil-sci@MIT-OZ

Economists should take the specifics of technology into account, and
linguists should take semantics into account in studying parsing.
The trouble isn't so much that some theories don't take into account
facts that aren't pure economics or linguistics, as the case may be,
but that the fields develop methodologies in terms of which it is seen
as wrong to go outside.


∂31-Jan-83  2046	ISAACSON at USC-ISI 	Re:  meta-epistemology, etc. 
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  20:46:29 PST
Date: 31 Jan 1983 2025-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  meta-epistemology, etc.
From: ISAACSON at USC-ISI
To: JCMa at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]31-Jan-83 20:25:30.ISAACSON>


In-Reply-To: Your message of Monday, 31 Jan 1983, 22:43-EST


JCMa: If you will admit that a fetus is a process, then doesn't
it implicitly encode time by its very definition as a process?


I will gladly admit that a fetus is a process.  In fact, I think
*life*, ALL life, is (an unfolding) process.  I'm tempted to say
unfolding in TIME, as one would normally think about processes,
but I can't bring myself to say that.  I'm stuck with the more
primitive concept of SEQUENTIALITY as a precursor of the concept
of time.

So, I don't know whether to think of the fetus as "encoding" time, or
as "manifesting" time, and I'm not sure that there is a
substantial difference between the two views.


∂31-Jan-83  2059	JCMa@MIT-OZ 	Putnam on Chomsky, and Innateness    
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  20:58:49 PST
Date: Monday, 31 January 1983, 23:44-EST
From: JCMa@MIT-OZ
Subject: Putnam on Chomsky, and Innateness
To: phil-sci@mc
Cc: Berwick@oz

In his "Reason Truth and History," Hilary Putnam only mentions Chomsky
once.  This is it:

"I will not discuss here the expectation aroused in some by Chomskian
linguistics that cognitive psychology will discover *innate* algorithms
which define rationality.  I myself think that this is an intellectual
fashion which will be disappointed as the logical positivist hope for a
symbolic inductive logic was disappointed."

[Citation: Hilary Putnam, "Reason Truth and History," (Cambridge: Cambridge
	   University Press, 1981), p. 126.]

Comments?

∂31-Jan-83  2103	ISAACSON at USC-ISI 	Re:  Pre-natal meta-epistemology  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  21:03:05 PST
Date: 31 Jan 1983 2049-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Pre-natal meta-epistemology
From: ISAACSON at USC-ISI
To: GAVAN at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI]31-Jan-83 20:49:31.ISAACSON>



In-Reply-To: Your message of Monday, 31 Jan 1983, 22:25-EST


GAVAN: Tactile sensation might have more to do with it, as I
believe KDF hinted earlier.

I don't recall KDF saying that.  I think I said that yesterday,
and I certainly agree with it.

I don't know much about audio perception of space, certainly not
in fetuses.  We have here, in St.  Louis, the Central Institute
for the Deaf and I may ask some of the people I know over there,
if they lend me their ear.

GAVAN: I believe Kant may have touched on this in his doctoral
dissertation.


Sorry, never read that document.  It might be interesting to get
the relevant excerpts.

GAVAN: I share your intuition.


Here is one of those few things whose intrinsic value to each
individual holder only increases through sharing.


∂31-Jan-83  2126	John McCarthy <JMC@SU-AI> 	CORRESPONDENCE, etc. and meta-epistemology again     
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  21:25:46 PST
Date: 31 Jan 83  1818 PST
From: John McCarthy <JMC@SU-AI>
Subject: CORRESPONDENCE, etc. and meta-epistemology again 
To:   phil-sci@MIT-OZ  

This rather long exposition is not intended primarily as an argument
for the correspondence theory of truth, although it presents my
position on the relations among CORRESPONDENCE, COHERENCE and CONSENSUS.
It is addressed primarily to people who already base their thinking
on some kind of correspondence theory and outlines some research ideas.
The primary idea is that abstract meta-epistemology is worth studying.
By abstract I mean that we consider a knowledge seeker in a world,
and we consider the effectiveness of strategies as functions of the
world.  For this purpose, it is often appropriate to consider model
worlds that are not candidates as theories of the real world.
By studying strategies in abstract worlds, both theoretically and
experimentally, we may develop candidate strategies for application
to the real world.  These candidate strategies may be discussed as
philosophies of science and embedded in programs interacting with
the incompletely known physical and mathematical worlds.

Even meta-level questions such as the appropriate theory of truth may
be studied in these abstract systems.

Here are my views on the relations between CORRESPONDENCE, COHERENCE,
and CONSENSUS baldly stated.  Arguments are later.

1. The truth of a statement about the world is defined by its
CORRESPONDENCE to the facts of the world.  The truth of a statement about
mathematical objects is determined by its correspondence to the facts
about these mathematical objects.  Of course, both of these presuppose the
existence of the world and of mathematical objects.  Tarski says: "Snow is
white" is true if snow is white.  This has an unfortunate but inevitable
circularity, because we use language for talking about the world, and
we're talking about sentences in the same language.  The circularity has
the consequence that the definition doesn't itself provide a means of
determining facts about the world.  Unfortunate but inevitable.

2. Our means of trying to determine the truth involves the COHERENCE
of large collections of statements including reports of observation.
We do not take COHERENCE as the definition of truth, because we always
want to admit the possibility that a collection of statements may
be coherent but wrong.  Naturally, we will only come to believe that
it is wrong if some other collection of statements is found to
be more COHERENT, but the new one may be wrong also.

3. CONSENSUS is a mere sociological phenomenon whereby groups of
people come to more or less agree about the truth of some collection of
statements.  At any given time there may or may not be CONSENSUS in
various groups of people.

Meta-epistemology again:

A Toy mathematical example illustrating use of the above concepts:

	Consider a mathematical system consisting of a computer
C, a language L, and a collection D of interacting automata
to which C is connected.  We suppose that the language L includes
a predicate symbol  holds, and we interpret  holds(i,s,t)  as
asserting that the  i th subautomaton of  D  is (was) in state
s  at time  t.  We further interpret a certain list B
of sentences in the memory of the computer as the list of what
the program BELIEVES.  Sentences elsewhere in memory are considered
mere data.  We suppose that the automaton system, including the
computer part is started in some initial configuration.  At some
times during the operation of the system, certain sentences will
be in the list  B.  Suppose  holds(17,5,200)  is in that
list at some time  t1.  We regard it as true, and the program as
BELIEVING it correctly if subautomaton 17 is in state 5 at time 200.
In fact, whether the program BELIEVES it is irrelevant to its truth,
since its truth depends on the evolution of the automaton system,
in interaction with the program, and not on the contents of the list
B.  However, our interest is in designing knowledge seeking programs,
and we are interested in what programs connected to what automaton
worlds will have lots of true beliefs.
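
A minimal sketch of this toy model in Python (my illustration, not part of
the original proposal): the world D is a collection of subautomata evolving
under an arbitrary made-up update rule, the list B holds sentences
holds(i,s,t), and a belief counts as true exactly when it CORRESPONDS to
the recorded trajectory of the world.

import random

def run_world(n_automata=20, n_states=6, n_steps=300, seed=0):
    """Evolve the automaton system D and record its whole trajectory."""
    rng = random.Random(seed)
    state = [rng.randrange(n_states) for _ in range(n_automata)]
    history = []                 # history[t][i] = state of automaton i at time t
    for _ in range(n_steps):
        history.append(list(state))
        # Arbitrary interaction rule: each automaton's next state depends
        # on its own state and its left neighbour's.
        state = [(state[i] + state[i - 1]) % n_states for i in range(n_automata)]
    return history

def holds(history, i, s, t):
    """The fact of the world that the sentence holds(i,s,t) asserts."""
    return history[t][i] == s

history = run_world()
B = [(17, history[200][17], 200),          # a correctly believed sentence
     (3, (history[50][3] + 1) % 6, 50)]    # a false belief
true_beliefs = [b for b in B if holds(history, *b)]
print(len(true_beliefs), "of", len(B), "beliefs CORRESPOND to the facts")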

	One important class of programs, to be compared in effectiveness
with others, are programs that use data structures interpretable
as sentences about the world, mathematics, goals, etc. - in short
the kind of program now used in much AI research.  The program
may be provided with an initial stock of sentences.  Some of these
sentences may be regarded as presuppositions about the kind of
world to which the program is connected.  Of course,
it wouldn't be interesting to include such assertions as  holds(17,5,200)
in the presuppositions, and then admire the result if
the program moves the sentence to the list  B.  In evaluating
programs, it would be most interesting to consider connecting them
to a variety of automaton systems in a variety of initial states.

Remarks:

	1. There will be programs that can be ascribed more true
beliefs if we use a different language and some other location
than in the list  B.  Indeed programs that evolve intelligence
are unlikely to use this specific language.  However, since we
are talking about DESIGNING the program, it is difficult enough
to make it smart in the way we intend and quite unlikely that
it will turn out to be smart in some entirely different interpretation.
Therefore, we'll stick to the language  L  and the list  B.

	2. Finite automaton worlds are discussed as an example only.
If I were smarter and I thought your patience were greater, I would
have the program interacting with systems more like those discussed
by current theories of physics.  Even within the automaton model,
there are more interesting kinds of assertions than  holds(i,s,t)
which is rather like an assertion that a particular molecule
has a certain position and velocity at a given time.  Assertions
about the structure of the system of automata, e.g. what is connected
to what, more closely resemble present-day scientific assertions.
Indeed the "obstacles and roofs" world that I mentioned earlier
is EPISTEMOLOGICALLY and HEURISTICALLY more like our own world.
The automaton model is only METAPHYSICALLY ADEQUATE for our present
purpose.  (These terms are used in the sense of McCarthy and Hayes "Some
Philosophical Problems from the Standpoint of Artificial Intelligence").

	3. What kinds of programs should we design?  This depends
on what kinds of automaton systems we intend to connect to the
program.  If we connect it to worlds that behave like those
that behaviorist psychologists were in the habit of connecting to
their rats and sophomores, stimulus-response models of the world
will be fine.  Indeed the sentences of the form  holds(i,s,t)  may
be quite superfluous for success in worlds designed by behaviorists,
and sentences like  responds(s,r)  interpreted as "If I give it
the signal  s  I will get back the response  r"  may be more
appropriate.  In terms of its ability to predict, a correspondence
concept of truth will be irrelevant, because the behaviorist will
have designed his automaton system to give the intended responses
to the stimuli, and the actual mechanism whereby this is done
will be hidden from the program.

	However, we might consider designing programs of a different
kind.  These programs would hypothesize very large systems
of interacting simple automata connected in a regular way and
such that the inputs to the program were the result of averages
of large numbers of "microscopic" events.  The states and transitions
of the individual microscopic automata would not affect the
inputs of the program, i.e. would not be observable.
Nevertheless, the laws connecting the individual "microscopic"
automata would be permanent features of the world and could
be used to explain and predict otherwise unpredictable events
that were more directly observable.

	In worlds that I would design, such strategies would
be more effective than strategies hypothesizing stimulus-response
laws.  I would design such worlds for my program, because I
believe such worlds are more like the world to which I am connected
and of which I am a part.

	4. If we give the program worlds composed of interacting
parts, sentences interpreted as asserting that the world is so
constructed would be true.  Moreover, research programs aimed at
discovering such parts, their internal structures and their interactions
would be likely to generate true beliefs, and would be more successful
than other strategies in predicting the experiential consequences
of actions.  This is almost tautologous, since if we connect a
program to a world constructed of interacting parts, its beliefs will
be true if they assert this fact, and its predictions of the experiential
consequences of actions are more likely to be correct if the strategy
takes the facts into account.  Therefore, this is not evidence for
the appropriateness of a correspondence theory of truth in dealing
with human experience.  However, if humans have in fact evolved
in a world composed of interacting parts, then considering
epistemological models of the kind proposed here can help us
devise intelligent strategies for learning programs.

	5. Besides its "official beliefs" which I would have the
program exhibit for my inspection on the list  B, the program
would keep on various lists many other kinds of sentences "about" the laws
of interaction of the automata making up the world.  We could inspect
these lists and try to interpret the sentences as assertions about
its world.  Sometimes we would succeed and interpret the sentences
as true or false.  Sometimes we would fail and say that a certain
sentence has no clear interpretation because the concepts are
confused.  For example, some sentence might be analogous to one
about how much phlogiston a rat produces per day.

	6. In certain kinds of world, the best strategy for
accumulating beliefs would be a COHERENCE strategy.  The strategy
would have collections of assertions about large numbers of aspects
of the world, some of which would be alternatives to each other.
A strategy that put in the list  B  of official beliefs the
most COHERENT collections of assertions would probably be
most effective in generating beliefs that CORRESPOND to its
world.  It would also be most effective in predicting the experiential
consequences of actions.
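
A toy illustration of my own (only a stand-in for a real coherence measure):
rival collections of assertions are scored by how well they agree with the
observation reports and how little they contradict themselves, and the
highest-scoring collection is promoted to the official list B.

def coherence(collection, observations):
    """Toy coherence score: support from observations minus contradictions."""
    support = sum(1 for a in collection if a in observations)
    contradictions = sum(1 for a in collection
                         if ("not", a) in collection
                         or (a[0] == "not" and a[1] in collection))
    return support - contradictions

observations = {("hot", "room1"), ("dark", "room2")}
candidates = [
    {("hot", "room1"), ("dark", "room2"), ("hot", "room2")},
    {("hot", "room1"), ("not", ("hot", "room1"))},
]
B = max(candidates, key=lambda c: coherence(c, observations))
print("official beliefs:", B)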

	7. If the knowledge seeking program were composed of many
semi-independent subprograms, each connected to the automaton world
in a different way, strategies of co-operation might well develop.
Such strategies might involve inter-knower lists of beliefs
obtained by CONSENSUS.  This is especially likely if the individual
knowers were limited by short lives from independent access to the
phenomena and so were forced to develop collective institutions
of science.
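
Again a toy sketch of my own: several knowers, each with its own belief
list, pool into an inter-knower list the sentences that a majority of them
hold.  The quorum rule is an arbitrary illustration of a CONSENSUS step.

from collections import Counter

def consensus(belief_lists, quorum=None):
    """Return the sentences held by at least a quorum of the knowers."""
    if quorum is None:
        quorum = len(belief_lists) // 2 + 1
    counts = Counter(s for beliefs in belief_lists for s in set(beliefs))
    return {s for s, n in counts.items() if n >= quorum}

knowers = [{"holds(17,5,200)", "responds(a,b)"},
           {"holds(17,5,200)"},
           {"holds(17,5,200)", "holds(2,1,10)"}]
print(consensus(knowers))   # {'holds(17,5,200)'}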

	8. So far our epistemological statements have all been at
the meta level.  We have discussed the beliefs and truth seeking
strategies of the programs in the automaton world from outside
that world.  If the world is complex, and complex worlds are the
primary interest, it will sometimes be effective for the program
itself to have theories of truth and belief and use these theories
in its knowledge seeking strategy.  We might, for example, include
sentences expressing such meta-beliefs in the initial supply of
sentences we give the program.  We might include the whole general
theory including a CORRESPONDENCE theory of truth, a COHERENCE
strategy of search and a CONSENSUS theory of co-operation
in the initial stock of sentences provided we could formalize
it suitably.  We might try out rival theories, suitably formalized.
Alternatively, we might leave out any theories and see if they
develop.

	9. As long as we provide a language  L  and examine what
sentences in it appear in the list  B,  we minimize our problems
of interpretation.  However, if the system develops other languages,
or if we adopt some more "natural" approach than having a  L  and
B,  we will have problems of whether certain data structures can
be interpreted as sentences making assertions about the world, i.e.
in inventing a translation rule into the language  L  or whatever
language we use for describing the world.  However, I don't think
we will face a problem of having alternative translations that
both "make sense".  As I said in a previous message, cryptography
experience and the Shannon theory suggest that such problems are
extremely unlikely provided we take symmetries and isomorphisms
into account.  My paper "Ascribing Mental Qualities to Machines"
discusses some of these points.

	10. Symmetries and isomorphisms of the world or parts of
it raise interesting problems.  The world to which we connect
the computer may have symmetries and it may be isomorphic to
structures other than those we design.  The program may consider
rival theories and then discover that they are isomorphic.
If we consider final theories of the whole world, the preferred outcome
is clear.  It should find the isomorphic theories and
recognize their isomorphism.  Moreover, many isomorphisms
can be kept implicit by using a formalism that is canonical
with respect to the transformations involved.

	However, we are not primarily interested in programs
that will create a final theory of the world, write it in list
B,  and then stop.  More facts may break an isomorphism, so the
machine must be more sophisticated.  On the one hand, it can't
spend time trying to decide between theories isomorphic with
regard to the means it has for interacting with the phenomenon
involved.  On the other hand, it should keep the equivalent
theories on hand just in case the equivalence breaks down later.

	11. All this is methodology intended as a guide to research.
There are two directions in which research might proceed, theoretical
and experimental.  On the one hand, we can develop theories of
what can be found out about what kinds of worlds.  E. F. Moore's
"Gedanken Experiments with Sequential Machines" in Automata Studies
should be read by anyone who contemplates research in this area.
Its merit is that it makes important distinctions and proves some
theorems about investigating automaton worlds.  Its fault is that
these worlds have too little structure for a sophisticated research
strategy to be effective.  They aren't as bad as the stimulus-response
worlds, however, since at least they contain memory.

	I fear that it is beyond our present knowledge to formulate
sophisticated conjectures about the effectiveness of different theories
of truth in guiding research in automaton worlds.  I suppose the
theoretical state of meta-epistemology is that we need to work on
establishing interesting conjectures.

	Experimental research in this area seems inappropriate at
present until there are some conjectures.  For example, a program
for solving "obstacles and roofs" worlds might turn out to be just an
exercise in programming.  I would also be uninterested in a proof
that "obstacles and roofs" is NP-complete.

	12. It might be interesting for an adherent of the COHERENCE
theory of truth to attempt a meta-epistemological model.  I wouldn't
know how to begin.  He might start in the same way as I did - consider
systems consisting of a computer program connected to something with
which it interacts.  My knowledge seeker attempts to find out the
structure of the something.  However, just considering the knowledge
seeker connected to something involves a something, i.e. a world,
and makes the problem one of finding out about the world.  Someone
who rejects "the world" and an associated correspondence theory
might well consider that these have already been presumed by
connecting the program to something.  Well, that's their problem.
Perhaps even Gedanken experiments are inappropriate from their
point of view.

	Here is another way of putting the question.  Are Gedanken
experiments or real experiments with knowledge seeking programs
appropriate from the point of view of any non-correspondence
theory of truth?  If so, what is the experimental environment of
the program, and what kinds of sentences does it attempt to ascribe
truth to?  Would an obstacles-and-roofs world be appropriate,
or does it presume too much of a "real world"?

	13. Finally, I hope for some reaction, which is why I wrote
this. The reaction I hope for, isn't primarily further debate on
the correctness of the CORRESPONDENCE theory or even applause for
stoutly maintaining it, although I am willing to take part in limited
further debate.  Aside to GAVAN: I have not specifically attacked
coherence or consensus theories, because I have not formulated straw
men to be attacked.  However, if you formulate something to attack,
I'll attack it if I disagree with it.

	I mainly seek reaction to the idea of research in
abstract meta-epistemological models, i.e. the theory of knowledge
seeking programs connected with abstract worlds.  Are there interesting
conjectures about what strategies and what presuppositions
will succeed in what worlds?  Are experiments appropriate,
and which?

	Also, I would like to know if people find the ideas clear and/or
interesting or whether they require more detailed exposition to be even
comprehensible.  This length is my limit for this forum, but it may be
appropriate to try to develop more specific research questions if there is
interest.

∂31-Jan-83  2145	GAVAN @ MIT-MC 	Determinate Being  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  21:45:00 PST
Date: Tuesday, 1 February 1983  00:35-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John C. Mallery <JCMa @ MIT-OZ>
Cc:   ISAACSON @ USC-ISI, phil-sci @ MIT-MC
Subject: Determinate Being
In-reply-to: The message of 31 Jan 1983 22:43-EST from John C. Mallery <JCMa>

    From: John C. Mallery <JCMa>

    If you will admit that a fetus is a process, then doesn't it
    implicitly encode time by its very definition as a process?

Of course, if your ontology includes the notion of determinate being, 
then you believe everything is a process.  Even a brick.

∂31-Jan-83  2232	MINSKY @ MIT-MC 	innateness, sentences  
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  22:31:54 PST
Date: Tuesday, 1 February 1983  01:16-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 31 Jan 1983  20:03-EST from DAM


That was very helpful.  

DAM:  The theory that it is the provable theorems of ZF set theory is
     a very useful precise theory of metamathematics.  ...An imprecise
     theory of metamathematics is that it is the set of "tautological"
     truths; whatever those are, they seem to be objective: most people
     agree about definitional tautologies once they understand them.

I'm not sure what the latter are.  What happens when one thinks about
very small domains, e.g., a domain of, say, two points A and B, and
statements like (Whichever X you choose, there is another one).  Is
this an instance of a definitional tautology (since it would seem to
be in the nature of what "two points" must be defined as)?  Do you get
the same basic questions for such issues?  In other words, do you have
the same concerns for small-scale truths as well?  Or is the more
general mathematical-truth problem apparently no harder, so one might
as well deal with it?
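
One concrete way to read the small-domain example (my gloss, not MINSKY's):
over the two-point domain {A, B}, "whichever X you choose, there is another
one" is just "for every x there is a y distinct from x", and it can be
checked by exhaustive enumeration.

domain = ["A", "B"]
statement_holds = all(any(y != x for y in domain) for x in domain)
print(statement_holds)   # True for any domain with at least two elements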

∂31-Jan-83  2242	LEVITT @ MIT-MC 	"primitive" representations of space and time   
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  22:41:48 PST
Date: Tuesday, 1 February 1983  01:14-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   JCMa @ MIT-OZ, phil-sci @ MIT-MC
Subject: "primitive" representations of space and time
In-reply-to: The message of 31 Jan 1983  23:25-EST from ISAACSON at USC-ISI

    Date: Monday, 31 January 1983  23:25-EST
    From: ISAACSON at USC-ISI
    To:   JCMa
    cc:   phil-sci at MIT-MC, isaacson at USC-ISI
    Re:   meta-epistemology, etc.

    In-Reply-To: Your message of Monday, 31 Jan 1983, 22:43-EST
   ...  I'm tempted to say
    unfolding in TIME, as one would normally think about processes,
    but I can't bring myself to say that.  I'm stuck with the more
    primitive concept of SEQUENTIALITY as a precursor of the concept
    of time.

When people discuss the formation of spatial representations, they
don't get stuck so easily (or maybe, they have different WAYS to get
stuck) as when worrying about time, since the retina provides one
obvious anatomical picture of one "primitive" representation.  There
are probably equally specialized organs, say in the temporal lobe,
that segment time -- especially for representing periodic things
-- but we don't know much about them.  (Of course, in the ear the
basilar membrane makes rapid periodicity tractable by inventing
"pitch".)  Anyway, there's no reason we can't invent our own
"primitives" to think productively about temporal reasoning without
understanding the anatomy -- like Waltz and Evans, who didn't have to
wait for low-level vision programs to work to make great discoveries
about thinking about line drawings.  That decoupling still seems to be
the big AI "meta-breakthrough".

    So, I don't know whether to think of the fetus as "encoding" time, or
    as "manifesting" time, and I'm not sure that there is a
    substantial difference between the two views.

"Manifesting time"??  To me this sounds like Heidegger, who often
surpasses Hegel as most annoying philosopher.  (In philosophical
reading, my main filter discards work that uses "being" or "existence"
as the subject of a sentence.)  What do you mean?

∂31-Jan-83  2326	GAVAN @ MIT-MC 	There you don't go again, JMC.    
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  23:26:07 PST
Date: Tuesday, 1 February 1983  00:42-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: There you don't go again, JMC.   
In-reply-to: The message of 31 Jan 1983  11:09-EST from BATALI

    From: BATALI

        From: GAVAN

        You have already defended the correspondence theory "to the max", as
        they say in California.  Yet your denials of both the consensus and
        coherence theories have not been reasoned critiques of them, but
        rather ad hominem attacks against the person presenting them.  If you
        don't like the message, criticize IT -- not the messenger.

    As one on the correspondence side, let me say that I am not against
    the coherence of the coherence view and I consent to consensus.  I
    won't criticise these views because they are right.  I won't. I won't. I
    won't. And you can't make me.

I issued the challenge to JMC, not to you, BATALI.  JMC is the one who
categorically denies the coherence theory of truth and called the
consensus theory "muddled."  Other than that he has limited his
discussions to defenses of the correspondence dogma.  He has not
bothered to present reasoned critiques of the coherence and consensus
theories.  If his adherence to the correspondence theory and his
denial of the coherence and consensus views are anything other than
irrational, dogmatic prejudices, I would like to hear his rationale.

∂31-Jan-83  2354	John McCarthy <JMC@SU-AI> 	criticism of coherence and consensus       
Received: from MIT-MC by SU-AI with NCP/FTP; 31 Jan 83  23:54:22 PST
Date: 31 Jan 83  2350 PST
From: John McCarthy <JMC@SU-AI>
Subject: criticism of coherence and consensus   
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

As I remarked in my long message of 1818PST, which you may not have
got to yet or noticed the aside to you in it, I will give my opinions
of coherence and consensus if you give me a summary of your views,
or, if you would rather, references to previous messages or the literature.
I have read the article on the coherence theory in the Encyclopedia of
Philosophy, but the author of the article doesn't seem much more friendly
to the idea than I am, so I would prefer to criticize a presentation
by a partisan of it.

	The index to the Encyclopedia mentions consensus only in
connection with the  consensus gentium  argument for the existence
of God, so I suppose the "consensus theory of truth" is due to Kuhn
or Feyerabend or someone like that.  I don't promise to pursue
them very far, because, believe it or not, I am trying to cure myself
of being a controversialist, and will go only to limited lengths in
trying to win arguments.  The little I have read of Kuhn has left me
with the impression that there is unlikely to be anything useful for
AI in what he says.  I would, as it happens, find a reference to Putnam
more interesting, and I have found my copy of volume 2.


∂01-Feb-83  0033	GAVAN @ MIT-MC 	criticism of coherence and consensus   
Received: from MIT-MC by SU-AI with NCP/FTP; 1 Feb 83  00:31:18 PST
Date: Tuesday, 1 February 1983  03:11-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: criticism of coherence and consensus   
In-reply-to: The message of 31 Jan 83  2350 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

    As I remarked in my long message of 1818PST, which you may not have
    got to yet or noticed the aside to you in it, I will give my opinions
    of coherence and consensus if you give me a summary of your views,
    or if you would rather references to previous messages or the literature.

Yes, that message just came in.  I want to print it out so I can read it while
I'm on jury duty today (hopefully the prosecutor or defense attorney won't ask
me about questions of truth, proof, or evidence).  But the Dover is down again.
I'll try to print it out elsewhere tonight and respond as soon as I can.

    I have read the article on the coherence theory in the Encyclopedia of
    Philosophy, but the author of the article doesn't seem much more friendly
    to the idea than I am, so I would prefer to criticize a presentation
    by a partisan of it.

    	The index to the Encyclopedia mentions consensus only in
    connection with the  consensus gentium  argument for the existence
    of God, so I suppose the "consensus theory of truth" is due to Kuhn
    or Feyerabend or someone like that.  

The consensus theory is alluded to in Putnam's *Reason, Truth, and
History* and, as you suspected, it lurks in the background of the
debates between Kuhn, Feyerabend, et al.  The best explication of it,
however, can be found in Jurgen Habermas' "Theories of Truth."
Unfortunately, it hasn't yet been published in English.  I have a copy
of a translation by Tom McCarthy (Philosophy Department, Boston
University).  I'll try to mail you a copy within the next few days, if
I'm not sequestered.  Meanwhile, McCarthy presents a summary of
Habermas' philosophy in *The Critical Theory of Jurgen Habermas* (MIT
Press).

    I don't promise to pursue
    them very far, because, believe it or not, I am trying to cure myself
    of being a controversialist, and will go only to limited lengths in
    trying to win arguments.  

Persuasion is a difficult task.  It takes much effort, and offers few
rewards.  For me, the point is not to win arguments, but to clarify
the issues.

    The little I have read of Kuhn has left me
    with the impression that there is unlikely to be anything useful for
    AI in what he says.  I would, as it happens, find a reference to Putnam
    more interesting, and I have found my copy of volume 2.

The Kuhn-Popper-Lakatos-Feyerabend debate was under discussion on this
list because Carl Hewitt is interested in the issue for reasons
relating to his research interests.  See Hewitt and Kornfeld's MIT-AI
Lab Memo, "The Scientific Community Metaphor".  I really recommend
Putnam's *Reason, Truth, and History*.  It relates to the
Kuhn-Feyerabend debate, and also includes a lengthy critique of the
correspondence theory.  Putnam wants to replace it with a coherence
theory.  I'll argue as best I can, but Putnam might be more persuasive
for you.

∂01-Feb-83  0037	JCMa@MIT-OZ 	Languages, tenses
Received: from MIT-MC by SU-AI with NCP/FTP; 1 Feb 83  00:36:53 PST
Date: Tuesday, 1 February 1983, 03:21-EST
From: JCMa@MIT-OZ
Subject: Languages, tenses
To: LEVITT@MIT-MC
Cc: phil-sci@mc
In-reply-to: The message of 31 Jan 83 16:20-EST from LEVITT at MIT-MC

    From: LEVITT @ MIT-MC
    Subject: Languages, tenses
    In-reply-to: The message of 30 Jan 1983 15:01-EST from John C. Mallery <JCMa>

    This seems to be one of the more plausible manifestations of the
    homily "language limits thought".  It seems inevitable that the richer
    vocabulary of tenses a language has -- especially subjunctive and
    perfect tenses -- the more tractable it will be to describe complex
    plans, with concurrencies and contingencies, e.g. build a machine.

I suppose it might.  Presumably you can accomplish the same tasks either
way.  Thus, reduction in the required amount of problem solving due to
use of the syntactic approach could be more efficient; but I bet this
would only be a marginal improvement because lots of problem solving
remains to be done for all those things that aren't syntactically
reducible.

    Without this fluency it must be very hard to describe such a plan to
    someone else if help is needed.

I don't see why.  All that is needed is to develop some semantic conventions
for expressing the same things.  Of course, richer syntax would remove the need
to develop the conventions, and that might facilitate plan description, although
this wouldn't matter once the conventions were in place.

    Could hairy syntax have been the key steppingstone that let Europe
    build its technology and dominate the world?

I doubt it.  Social organization was much more important in fostering the
industrial revolution.  Countries in which a commercial bourgeoisie could
develop, and evolve into industrial bourgeoisies, were the ones that did best.
Countries with strong oligarchies did the worst.

    Could hairy tenses also make it easier to implement a personal plan,
    to remember and describe such a plan to oneself?

Perhaps.

∂01-Feb-83  0040	JCMa@MIT-OZ 	Re:  meta-epistemology, etc.    
Received: from MIT-MC by SU-AI with NCP/FTP; 1 Feb 83  00:40:09 PST
Date: Tuesday, 1 February 1983, 03:32-EST
From: JCMa@MIT-OZ
Subject: Re:  meta-epistemology, etc.
To: ISAACSON@USC-ISI
Cc: phil-sci@MIT-MC
In-reply-to: <[USC-ISI]31-Jan-83 20:25:30.ISAACSON>

    From: ISAACSON at USC-ISI
    Message-ID: <[USC-ISI]31-Jan-83 20:25:30.ISAACSON>
    In-Reply-To: Your message of Monday, 31 Jan 1983, 22:43-EST

    So, I don't know whether to think of the fetus as "encoding" time, or
    as "manifesting" time, and I'm not sure that there is a
    substantial difference between the two views.

It seems that something which encodes must have an explicit
representation of what it encodes.  On the other hand, something which
manifests may or may not have an explicit representation.  That's a big
difference!

∂01-Feb-83  0138	JCMa@MIT-OZ at MIT-MC 	Putnam: Correspondence, Tarski, and Truth 
Received: from MIT-MC by SU-AI with NCP/FTP; 1 Feb 83  01:38:01 PST
Date: Tuesday, 1 February 1983, 04:33-EST
From: JCMa@MIT-OZ at MIT-MC
Subject: Putnam: Correspondence, Tarski, and Truth
To: phil-sci@MIT-OZ at MIT-MC

These quotes are from chapter six "Fact and Value" in Hilary Putnam,
Reason, Truth, and History, (Cambridge: Cambridge University Press,
1981), pp. 127-129.  Comments and rebuttals?

"Questions in philosophy of language, epistemology, and even in
metaphysics may appear to be questions which, however interesting, are
somewhat optional from the point of view of most people's lives.  But
the question of fact and value is a forced choice question.  Any
reflective person HAS to have a real opinion upon it. . . .  If the
question of fact and value is a forced choice question for reflective
people, one particular answer to that question, the answer that fact and
value are totally disjoint realms, that the dichotomy `statement of fact
OR value judgment' is an absolute one, has assumed the status of a
cultural institution. . . .

The defenders of the fact-value dichotomy concede that science does
presuppose some values, for example, science presupposes that we want
TRUTH, but argue that these values are not ETHICAL values. . . .  

The idea that truth is a passive copy of what is `really'
(mind-independently, discourse-independently) `there' has collapsed
under the critiques of Kant, Wittgenstein, and other philosophers even
if it continues to have a deep hold on our thinking. . . .

Some philosophers have appealed to the EQUIVALENCE PRINCIPLE, that is TO
SAY OF A STATEMENT THAT IT IS TRUE IS EQUIVALENT TO ASSERTING THE
STATEMENT, to argue that there are no real philosophical problems about
truth.  Others appeal to the work of Alfred Tarski, the logician who
showed how, given a formal language . . . , one can define `true' FOR
THAT LANGUAGE in a stronger language (a so-called `meta-language').

Tarski's work was itself based on the equivalence principle: in fact his
criterion for a successful definition of `true' was that it should yield
all sentences of the form `P' is true if and only if P, e.g.

   (T) `Snow is white' is true if and only if snow is white

as theorems of the meta-language (where P is a sentence of the formal
notation in question).

But the equivalence principle is philosophically neutral, and so is
Tarski's work.  On ANY theory of truth, `Snow is white' is equivalent
to `"Snow is white" is true.'

Positivist philosophers would reply that if you know (T) above, you KNOW
what `"Snow is white" is true' means: it means SNOW IS WHITE.  And if
you don't understand `snow' and `white', they would add, you are in
trouble indeed!  But the problem is not that we don't understand `Snow
is white'; the problem is that we don't understand WHAT IT IS TO
UNDERSTAND `Snow is white.'  This is the philosophical problem.  About
this (T) says nothing."

∂01-Feb-83  0342	LEVITT @ MIT-MC 	Putnam: Correspondence, Tarski, and Truth  
Received: from MIT-MC by SU-AI with NCP/FTP; 1 Feb 83  03:41:00 PST
Date: Tuesday, 1 February 1983  06:02-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Putnam: Correspondence, Tarski, and Truth
In-reply-to: The message of 1 Feb 1983 04:33-EST from JCMa

I'm puzzled: why are we still discussing Truth?  I seem to remember
you (JCMa), along with most of the other participants, saying things to
the effect that absolute, universal truth doesn't exist (except
perhaps tautologically in some formal systems).  I didn't save your
message, but I remember being surprised that after disavowing belief
in the concept, you went on to argue a point about it or quote an idea
on it.  Argument by proxy is OK I guess, but if we ALL agree it's the
wrong tree, why bother?

Maybe the problem is that no one argued directly enough against BATALI,
who made a distinction between scientists and engineers, claiming that
scientists seem to seek "truth".  MINSKY, KDF and others suggested
more meaningful substitutes for "true", like "reliable within certain
well-marked boundaries" or "characterizing many experiments with a
short description" or simply "useful" -- all of which seemed more
structured and satisfactory.  What do we gain then from Tarski's idea
about what truth REALLY is, or a discussion by Putnam about Tarski
that begins

   The defenders of the fact-value dichotomy concede that science does
   presuppose some values, for example, science presupposes that we want
   TRUTH, but argue that these values are not ETHICAL values. . . .  

Am I the only one with this impression?  It could be that I'm just
decked by what's now a stack of vicarious arguments.  I'm still
recovering from Gavan and Hewitt arguing at length about Feyerabend
when neither of them would take Feyerabend's position.  With 2K years
of writing to survey (and I've already argued against survey courses),
could we at least restrict ourselves to arguments WE think are
understandable and plausible?  There's no drama in a discussion that
goes "defenders of X believe Y" -- it's like watching people watch TV.

JCMa -- do you think Putnam's argument is important, or are you just
satisfying a curiosity someone expressed earlier in the discussion?

∂02-Feb-83  1823	BATALI @ MIT-MC 	And on his farm there was a cow  
Received: from USC-ECLC by SU-AI with NCP/FTP; 2 Feb 83  18:22:56 PST
Received: from MIT-MC by USC-ECLC; Wed 2 Feb 83 18:18:41-PST
Date: Wednesday, 2 February 1983  21:12-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, GJS @ MIT-OZ, phil-sci @ MIT-OZ
Subject: And on his farm there was a cow
In-reply-to: The message of 2 Feb 1983  15:22-EST from MINSKY

    From: MINSKY

    What I don't understand is what you mean to say
    that, from the start, "we know that there are cows".  Do we know that
    "there is GOOD"?  What is different about cows is the attachment to
    all the expertise we have about sensory objects.  It isn't that we
    know there are cows, but that we set ourselves to use "cow" like we
    use words for things already familiar, like animals.

Let's leave "good" aside for the moment.  Do you really think that we
don't know that there are cows?  Do we know anything?

∂02-Feb-83  2328	ZVONA @ MIT-MC 
Received: from USC-ECLC by SU-AI with NCP/FTP; 2 Feb 83  23:28:18 PST
Received: from MIT-MC by USC-ECLC; Wed 2 Feb 83 23:23:53-PST
Mail-From: ZVONA created at  2-Feb-83 11:23:07
Date: Wednesday, 2 February 1983  11:23-EST
Sender: ZVONA @ MIT-OZ
From: ZVONA @ MIT-MC
To:   cm-i @ MIT-OZ
Redistributed-to: phil-sci at MIT-OZ at MIT-MC
Redistributed-by: JCMa at MIT-OZ at MIT-MC
Redistributed-date: Thursday, 3 February 1983, 02:20-EST

"The brain's most important property may simply be its enormous
collection of neurons.  In the same way that collections of molecules
possess properties not found in individual molecules, suggests
CalTech's John Hopfield, elements of thought may arise spontaneously
from large collections of neurons.  Researchers at MIT are now
building a "connection machine," a computer with a million
interconnected processors that will test Hopfield's idea."

-- Newsweek

[Ever hear of this Hopfield character?]

∂03-Feb-83  0106	MINSKY @ MIT-MC
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  01:06:11 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 01:04:30-PST
Mail-From: MINSKY created at  3-Feb-83 03:30:51
Date: Thursday, 3 February 1983  03:30-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   ZVONA @ MIT-OZ
Cc:   cm-i @ MIT-OZ
In-reply-to: The message of 2 Feb 1983  11:23-EST from ZVONA
Redistributed-to: phil-sci at MIT-OZ at MIT-MC
Redistributed-by: JCMa at MIT-OZ at MIT-MC
Redistributed-date: Thursday, 3 February 1983, 03:58-EST

I know Hopfield, who seems like a reasonable person who has theories
about synaptic learning in lower animals.  But I haven't read his
theory.

The same Newsweek article quoted me as saying something like "Logic
doesn't work - ever".  What I said was something more complicated: that
outside of mathematics, one cannot find simple, general statements
that are unconditionally true, since there are always exceptions or
complicated side conditions.  I haven't found any convincing
counterexample to this; the only way I know to protect the logic
- which I believe works very well indeed - from the fact that
assumptions are always imperfect outside of mathematics - is to
use something like McCarthy's extra predicate, which amounts to
adding "unless something prevents X".

Anyway, I would assume that the bug, as usual, is in Newsweek and not
in Hopfield.  Besides, I think the connection machine is in fact
likely the only way to test neural net theories, so the story may turn
out to be truer than that reporter had any right to expect.

∂03-Feb-83  0122	DAM @ MIT-MC 	Meta-epistemology    
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  01:21:54 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 01:21:08-PST
Date: Tuesday, 1 February 1983  13:48-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JCM @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: Meta-epistemology


	OK, I will embrace your meta-epistemological outlook, at least
for the sake of discussion.  Consider a world and an observer in that
world, i.e. assume a correspondence-style God's eye view of SOME
HYPOTHETICAL cognitive agent; let's call him Robby the Robot.  Tarskian
semantics provides a precise definition for a relationship between the
sentences Robby might believe and Robby's world (if anti-correspondence
people deny me the right to CONSIDER taking such a God's eye view of a
HYPOTHETICAL and DEFINED situation I will ignore them).  The question
I am interested in here is why Robby should hold a correspondence
theory of truth.  McCarthy has argued that it is important for Robby
to hold "true" sentences, where we have DEFINED IN THIS CONTEXT what we
mean by "true".  But why should Robby be interested in Tarski?  All
Tarski can tell Robby is:

"holds(a b c)"  iff  holds(a b c)

	This seeming inadequacy of Tarskian semantics to help Robby
has been pointed out by Minsky and Putnam and is admitted (I think) by
McCarthy and myself (thanks JCMa for the Putnam quotes, the second
batch had some interesting arguments).  Well perhaps Tarski does not
interest Robby until Robby wants to build his own robot (let's say
Robby is interested in AI).  Why should Robby think Tarski has anything
interesting to say about AI?

	I think Tarski is very interesting.  I think that the
correspondence theory of truth is very interesting because it helps us
think about Robby (remember Robby is DEFINED to be a robot in a
correspondence-type world).  However it seems useful to ask why Robby
should be interested in Tarski.  The more concrete we can be in
proposing answers to this question the better.
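
	To be concrete myself, here is a toy sketch of the sort of
thing Tarski gives us (the little "world" and the sentence forms are
invented for illustration only).  The meta-language clause for an
atomic sentence just restates the fact, exactly as in the "holds"
schema above; the recursive clauses for "not" and "and" are where the
definition does any work.

WORLD = {("on", "a", "b"), ("clear", "a")}      # Robby's hypothetical world

def true_in(world, s):
    # Recursive Tarski-style truth definition, stated in the meta-language.
    if s[0] == "atom":                  # e.g. ("atom", "on", "a", "b")
        return s[1:] in world           # '"holds(a b c)" iff holds(a b c)'
    if s[0] == "not":
        return not true_in(world, s[1])
    if s[0] == "and":
        return true_in(world, s[1]) and true_in(world, s[2])
    raise ValueError("unknown sentence form")

print(true_in(WORLD, ("atom", "on", "a", "b")))                    # True
print(true_in(WORLD, ("and", ("atom", "clear", "a"),
                             ("not", ("atom", "on", "b", "a")))))  # True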

	Should Robby have a mathematics?  Should Robby have "our"
mathematics?  What is "a mathematics"?  Should Robby have definitional
tautological truths?  Should Robby be able to make conjectures about
what his world IS, rather than just about what is true of it?  What
does it mean to make a conjecture about what the world is?  What
is a conjecture, especially a conjecture about what something is?

	Let me call a conjecture about what something
is an "ontological conjecture".  For example Robby might conjecture
that his world is a "roofs and obstacles" world.  I am VERY interested
in what ontological conjectures are.  (And I have some
conjectures about what ontological conjectures are.)

	Perhaps a good reason for Robby to be interested in Tarski
has to do with ontological conjectures.  Perhaps
a conjecture about "what something is" is a conjectured relationship between
a language and a "world".  But conjectures are in minds; are "worlds"
in minds?  Does the correspondence theory of truth lead one to the
conclusion that Robby constructs "worlds" in his mind?  I think it
does, but we clearly have a long way to go before we really understand
this.

	David Mc

∂03-Feb-83  0126	LEVITT @ MIT-MC 	sequences in space, time, and intra-mental experiments    
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  01:26:12 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 01:24:22-PST
Date: Thursday, 3 February 1983  04:15-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   ISAACSON @ USC-ISI, kdf @ MIT-OZ, minsky @ MIT-OZ
Cc:   phil-sci @ MIT-MC
Subject: sequences in space, time, and intra-mental experiments
In-reply-to: The message of 1 Feb 1983  16:39-EST from ISAACSON at USC-ISI

    From: ISAACSON at USC-ISI
    Re:   "primitive" representations of space and time
    When I said that "I'm stuck with the more primitive concept of
    SEQUENTIALITY" I meant that I do this intentionally.  I think the
    notion of "sequence" is a precursor to the notion of time, and is
    shared with (one-dimensional) space.  In other words, if time and
    space ARE intrinsically different concepts which require
    different types of representations, as some would clearly argue,
    then the primitive concept of SEQUENCE probably underlies them
    both.  And I think it is more easily acquired from spatial (even
    one-dimensional) experiences.

I understand now, and while I agree with you and KDF about the
importance of concepts like SEQUENCE in organizing experience, we
should be careful to distinguish "conceptual primitives" from early
representations.  In other words, as an adult I have a concept of
SEQUENCE which I've factored in such a way that it's useful in some
very different areas; but developmentally, it seems unlikely that I
learned the SEQUENCE concept first and then began applying it to space
and time, so in that sense it's not "primitive" at all; and it's a
deep problem to see how I managed to factor it out (say, to help build
SCHEDULE), and how much help I got from adults.

On the other hand, Minsky's recent speculation about experiments
within the mind is intriguing: if my mind had constructed an early
SEQUENCE concept from some repeatable experiments with memory
organization (subject to anatomical constraints), it might have
produced a spare, crystalline prototype from which to make
analogies to space and time LATER -- and learning might have been much
easier as a result.  Of course, I might also discover lots of
generally useless "concepts", but if we believe the "sparseness"
intuition, most of the simple ones would tend to be useful for
SOMETHING in the "real world" later.  Such mental self-exploration
might be especially prevalent in very early life, if it's adaptive in
eliding expensive reformulations later.  Scenarios like this seem to
dim any hope of distinguishing "innate" and "early learned" structure,
but that seems ok.

Likewise, as ISAACSON implied, the machinery for representing temporal
periodicity should develop very early, since the early environment is
dominated by a slowly varying heartbeat.
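
A minimal sketch of what even crude periodicity detection might amount
to (the discrete "heartbeat" sequence below is just an invented
example): find the smallest shift under which the signal repeats.

def smallest_period(seq):
    # Smallest p >= 1 such that seq[i] == seq[i + p] wherever both exist.
    n = len(seq)
    for p in range(1, n + 1):
        if all(seq[i] == seq[i + p] for i in range(n - p)):
            return p

beats = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # a slowly repeating "heartbeat"
print(smallest_period(beats))            # 3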

Incidentally, I suspect that *knowledge about storage management* is a
critical component of mental self-knowledge, whether it is learned
early or hardwired.  Interestingly, there are still no useful theories
of complex storage management in computer science.  Only recently,
with personal computers, have we seen how costly or unreliable simple
tricks like "virtual memory" and caches can be.  (Xerox is now
developing an "object-oriented" storage system that knows more; most
personal machines with VM have serious paging problems.)  Perhaps brains
can't get by with simple tricks either, and spend much of their effort
on clever buffering schemes that anticipate storage use.  If so, mental
self-exploration might include discovering or optimizing such schemes.
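
To make the "simple tricks" point concrete, here is a toy LRU cache
(the page names and sizes are invented): a cyclic reference pattern
just one page larger than the cache misses on every single access --
the classic thrashing case.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, key):
        # Return True on a hit, False on a miss (the key is then loaded).
        if key in self.store:
            self.store.move_to_end(key)      # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
        self.store[key] = True
        return False

cache = LRUCache(capacity=3)
pattern = ["p1", "p2", "p3", "p4"] * 5       # cycle over 4 pages, cache holds 3
hits = sum(cache.access(p) for p in pattern)
print(hits, "hits out of", len(pattern))     # 0 hits out of 20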

∂03-Feb-83  0341	ISAACSON at USC-ISI 	Sparseness in Stringland     
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  03:41:13 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 03:39:15-PST
Date: 3 Feb 1983 0330-PST
Sender: ISAACSON at USC-ISI
Subject: Sparseness in Stringland 
From: ISAACSON at USC-ISI
To: MINSKY at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI] 3-Feb-83 03:30:42.ISAACSON>


The Stringland World


Stringland is a world comprising the totality of strings of
finite length.  The elements of these strings are absolutely
anything.  You see, as an intelligent machine, you're not going
to worry about such elements individually, but only to determine
DIFFERENCES as compared to other elements.  Therefore, you don't
start off with any "alphabet", you don't care about "symbols" qua
symbols, you don't have to memorize symbols, recognize symbols,
look them up, or do any of the other overhead activities that you
would take for granted if you were constructed as a formal
logistic system.  Now, if you are an embryonic entity, just
starting off your development in Stringland, you're sure to
appreciate that!  [Well, if you don't appreciate it right NOW,
you'll appreciate it when you really become intelligent.]


Your Innate Capabilities


You live in Stringland.  Your only inputs (through some sensory
organs) are strings of finite length.  You are endowed with only
one innate capacity:

DETECT & RECORD LOCAL DIFFERENCES; KEEP DOING THAT AS LONG AS YOU
LIVE!


Naming You


I'm going to be talking to you, so I better give you a name.  I'm
going to call you INTELLECTOR, and your basic activity will be
called Basic INTELLECTOR Process (BIP).  Since you ARE what you
DO, I'll call you either INTELLECTOR or BIP, as I wish.  While
I'll be talking to you in the singular, you have numerous
brothers and sisters, exactly like you, packed into one system.
You are a Society of Intellectors.


BIP's Complaint


You're yet to be born, yet I hear you complain.  In essence you
say, "how in the world am I going to ever be intelligent if this
is ALL you endowed me with?  NO way!  I'm doomed!"

Well, I say, life IS tough.  You're going to work VERY hard to
evolve and develop your intelligence, and then MAINTAIN it, but,
believe you me, you are going to make it with the little I gave
you.  Here, let me show you.


Your First String


When you get your first string from the environment don't despair.
You will have no way of knowing anything about it.  You have
never seen whatever "symbols" will be there, you will have no way
of recognizing those symbols, and certainly you won't be able to
assign any "meaning" to that first string.  However, remember,
your sensory organs are pre-tuned to detect DIFFERENCES among
those signals or "symbols".  What they will do for you,
"automatically", they will take each symbol in the string and
compare it to its two immediate neighbors.  This type of
comparison can yield exactly one out of four results.  I.e.,
comparing a given symbol with left and right neighbors:

1. If both neighbors are distinct from said symbol then the
result is: A

2. If the left is distinct and the right is indistinct: B

3. If the left is indistinct and the right is distinct: C

4. If both left and right are indistinct: D


The first and last elements are treated as having distinct elements on
their respective "outsides".  These comparisons will be, most likely,
done in parallel, so it's going to be pretty fast for you.  What
you'd be receiving from your sensors then is a 4-letter string.
Take it in, and now that you have seen what the sensors did to the
initial string, you do the same to the 4-letter string you just
received.  You'll get a new 4-letter string.  Now, just keep
doing the same thing all over again, and just never stop!
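
[In case you like seeing your endowment spelled out: here is one
reading of that single pass, as a rough sketch.  The letters A, B, C,
D are just labels, and the sample string is invented.]

def bip_step(s):
    # One difference-detection pass: compare each element with its two
    # neighbors; the ends count as having distinct "outside" neighbors.
    out = []
    for i, x in enumerate(s):
        left_distinct = (i == 0) or (s[i - 1] != x)
        right_distinct = (i == len(s) - 1) or (s[i + 1] != x)
        if left_distinct and right_distinct:
            out.append("A")
        elif left_distinct:              # left distinct, right indistinct
            out.append("B")
        elif right_distinct:             # left indistinct, right distinct
            out.append("C")
        else:                            # both neighbors indistinct
            out.append("D")
    return "".join(out)

print(bip_step("xxyzz"))             # BCABC -- the sensors' first report
print(bip_step(bip_step("xxyzz")))   # AAAAA -- the pass applied to its own output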


Your First Memory


You'll be surprised that, after a while, you will start seeing
the same strings being generated all over again.  It may bore
you, but what do you care?  At least you'd start handling
something familiar.  You will have generated a CYCLE of strings
and you will be maintaining this cycle indefinitely.  You may
want to think of it as "memorizing" a certain trace associated
with the first string you ever encountered!  Also, because of the
"familiarity" of the strings in the cycle, and the regularity in
which it is generated within you, you'll start to "feel" a bit
more secure about the world you're in.  You'll start having a
sense that things can happen in your world in an organized
SEQUENCE.  Later on, after you have experienced many more cycles,
and interactions of cycles, you will have developed a notion of
"time", whatever that may be.


Your Second String


Remember!  Stringland is full of strings, and those bombard your
sensors all the time.  Another one of your Society of
Intellectors will soon take in another string from the
environment and treat it in the very same way that you have
treated your first one.  After the second cycle will have been
generated, you may wish to compare the two cycles.  [It is really
very easy to do that comparison.  I could show you how to do
that, but, remember, you are pre-wired to DETECT DIFFERENCES and
this is just one more instance of difference-detection.]  If the
second cycle is the same as the first, you will conclude that the
second string belongs to the same FAMILY as the first one.  You
will then erase the second cycle [because you want to preserve
your resources which are clearly finite].  If the cycles are
different, you will know that you have encountered a member of a
new family of strings and you will continue to maintain the
second cycle indefinitely.
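
[Again only as a sketch, here is one way the cycle bookkeeping and the
family test might go, using the same difference pass as before; the
sample strings are invented and the details are just my reading.]

def bip_step(s):
    # The same difference-detection pass sketched earlier.
    out = []
    for i, x in enumerate(s):
        ld = (i == 0) or (s[i - 1] != x)
        rd = (i == len(s) - 1) or (s[i + 1] != x)
        out.append("A" if ld and rd else "B" if ld else "C" if rd else "D")
    return "".join(out)

def cycle_of(s):
    # Iterate the pass until some string recurs; the recurring part is the
    # cycle, i.e. the "memory" trace this string leaves behind.
    seen = []
    while s not in seen:
        seen.append(s)
        s = bip_step(s)
    return frozenset(seen[seen.index(s):])

families = {}                            # cycle -> the strings filed under it
for incoming in ["xxyzz", "ppqrr", "abc", "aabbb"]:
    families.setdefault(cycle_of(incoming), []).append(incoming)

for members in families.values():
    print(members)                       # "xxyzz" and "ppqrr" share a family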


Your N-th Cycle



So, you see, if you continue the activities described for the
first and second strings, you will have generated quite a few
ongoing internal cycles, each one associated with a whole FAMILY
of strings on the outside, i.e., in Stringland.  You will be
surprised how efficient this internal representation of yours is.
You see, each cycle effectively represents an *unbounded* number
of external strings.  This is because those external strings can
be written in any "symbols" and there are plenty of those!


Your First Recognition


All of a sudden you'll notice that very few incoming strings
don't fall into one of your existing ongoing cycles.  That is,
you'll have established a CLASSIFICATION scheme for Stringland's
strings.  [Some human observers may not be particularly thrilled
with the criteria for your classification scheme.  But you should
not be bothered with that right now.  You should be proud that
you were able to come up with a classification; ANY
classification!  Because, after all, this is a good way to start
"knowing" your environment.]  [Note on Chinese Classification of
animals].  Then you'll notice that each time a new string is
classified into an old cycle you have a "feeling" that you've
seen it before; like you recognize it!  Its initial structure
and raw "symbols" (we'll call that "surface-structure") you have
never seen, but by its falling into an ongoing cycle you kind of
"know" what family it does belong in, and, being already used to
that family, you kind of know already something about that weird
new string!  In short, you're able to *recognize* strings you
have never seen before!



You Want to Discover New Classes


Now that you have gained a fair amount of confidence in your
abilities, you really want to explore Stringland and get as close
as you can to a complete classification of its strings.  Well,
you may not want to bother with strings which are too long for
some practical reasons.  But, up to a given length, you really
want an exhaustive classification.  What do you do?  You have a
pretty good idea what internal cycles look like and what kind of
structures internal strings have.  You take some internal strings
and operate on them with some very simple logic operations.
Maybe guided by some "intuition", but it's entirely OK to just
manipulate these more or less randomly.  Now that you have
internally generated some new strings, you feed them to yourself
as if they were external strings coming from the environment.
These may all fall into existing categories, or, if you are
lucky, one or more will establish their own new Cycle!  However,
you may have a problem.  After all you can't reach all the
vastness of Stringland.  What if there are no external strings
out there in your part of Stringland that belong in those
internally induced cycles?  Well, if it really bothers you, you
may want to conduct a search of your part of Stringland in order
to CONFIRM your internally generated cycles.  If, after a
reasonably methodical search you are satisfied that there are
none, you may decide to reject those cycles and erase them in the
interest of resource management.


The "deja vu" Effect


Occasionally you will have induced such internal cycles [maybe
during your periodic "garbage collection" operations, which we,
humans, sometimes call "dreams"] and forget to erase them.  Then,
one day, you will encounter a new string you have never seen
before, and for which no "real" cycle exists, but it will fall
into such a "dream-induced" cycle and will be RECOGNIZED!  You'll
be mildly amused by this, and will start entertaining notions of
precognition and such.  But you really shouldn't.  It is just
that you were sloppy managing your internally induced cycles.


The Sparseness of Stringland


Now that you have started to master your environment, you gain
more and more confidence.  You realize that, in spite of the
enormity of the number and variety of strings in Stringland, they
ALL belong in some cycle, or family of strings.  There are really
no "random" strings in an absolute sense.  You start wondering
about "a priori" stuff, and sometime, with Leibnitz, you think
about "Pre-Established Harmony".  Maybe you even wonder about who
or what might have pre-established that harmony you feel so good
about, and you wonder about a god or gods.  But, if you reflect a
bit, you will realize that all this classification, harmony, etc,
are really a function of the way YOU are built to process the
strings of Stringland!

One of your more profound observations is that each cycle is
associated with an enormously large family of strings on the
outside.  Within a given family, you can make substantial
alterations to a given string, yet it will remain in the family!
I.e., it will still be reducible to the same cycle.  Then you'll
realize the SPARSENESS of the systems of strings associated with
each cycle.  It will also give you, gratis, a very effective
error-correcting mechanism.  You can treat strings shabbily, if
needed, yet, much of the time you will stay within the same
family.  Pretty good to know if you often have to fight for your
survival and can't stop to monitor all those billions of ongoing
little parallel operations within you.  And I mean that you fight
for your survival every time a biological cell divides within you!
Ooops!  Now I assumed that you may be made up of our stuff, human
stuff!  Well, why not?  Do you really mind that?  It looks like
4-letter strings are also the kind that we humans have in our DNA
and RNA strings.  Why not see if we can make you in our own
image?


DNA, Dialectics, and What Not


Now that I got you interested, yes, there is more to it than just
having 4 letters in strings.  Your strings tend to become
COMPLEMENTARY at certain stages.  At other times they form, of
their own accord, hugely long palindromes.  Yes, if you did not
know it, palindromes have been found in DNA in recent years and,
so far, while they are thought to be VERY important, there is no
good explanation as to how they are formed or exactly why.  And,
if you sometime contemplate your own internal strings as they are
generated perpetually by yourself within your cycles, you will
find that they are remarkably similar in pattern and activity to
what some of our more profound human thinkers call DIALECTIC.
But I promised some of my human friends that I'll not spring
dialectics on them without prior warning, so I won't.


Epilogue


Dear BIP, I told you in the beginning that you are endowed with
enough stuff to make you do some remarkably intelligent things.
I didn't tell you the whole story, though.  You won't be able to
understand; after all, you are only one-dimensional!  Your cousin,
DIP (Dialectical Image Processor), who lives in a two-dimensional
world, would be able to grasp the rest of the story, so I'll tell
it to him, and, maybe, to some of my human friends, if they ask
me.



                        ---- N O T E S ----


[Note: The Chinese Classification]


In "The Order of Things" [LES MOTS ET LES CHOSES] Michel Foucault
gives in the Introduction his motivation for the study:

"This passage quotes a `certain Chinese encyclopaedia' in which
it is written that `animals are divided into: (a) belonging to
the Emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e)
sirens, (f) fabulous, (g) stray dogs, (h) included in the present
classification, (i) frenzied, (j) innumerable, (k) drawn with a
very fine camelhair brush, (l) etc., (m) having just broken the
water pitcher, (n) that from a long way off look like flies'.  In
the wonderment of this taxonomy, the thing we apprehend in one
great leap, the thing that, by means of the fable, is
demonstrated as the exotic charm of another system of thought, is
the limitation of our own, the stark impossibility of thinking
THAT."



References

1. U. S. Patent No.  4,286,330.  Available for $1.00 from: The
Commissioner of Patents and Trademarks, Washington, D. C. 20231


∂03-Feb-83  0855	DAM @ MIT-MC 	Semantic Grammars    
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  08:55:09 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 08:52:50-PST
Date: Thursday, 3 February 1983  11:28-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammars


	Date: Wednesday, 2 February 1983  14:14-EST
	From: BATALI

	There certainly are such structural constraints on the
	representational system.  But I think that unless neurophysiology just
	breaks wide open, the better path is to determine the semantical
	constraints on the representational system -- to determine what the
	structures must be able to represent.

	ok. I find this position reasonable.  Examining purely
semantic issues is one way to factor out (isolate) an aspect of
cognition (just as Chomsky factored out grammar).

	I showed how, with such knowledge (of space and time), the
	grammatical notion of "sentence" could be understood (by filling
	in the schema "something happened").

	Well I am not convinced you showed this.  Consider the
sentence "If set A has one more element than set B, and B has two
elements, then A has three elements".  Do you have an explanation for
the meaning of this sentence in terms of "something happened"?  I
think you need to assume more innate structure than just time and
space.  I suspect that one has to be able to understand "definitions"
of "objects".  The use of "objects" (the existence of noun phrases)
seems to be innate.

	The goal here is to determine answers to the question: "What must we
	know about, in order to know about everything?"

	I think this is perhaps the wrong way to look at things.  It
seems to me that I don't need to "know about" very much in order to
understand the above sentence (about A having three elements).
However the "understanding" of this sentence (I think) must involve
very sophisticated processing.  Perhaps the right purely semantic
approach to uncovering innate semantic notions is to try to
characterize (via something like axiomatic set theory) the totality
of "analytic" truths (i.e. the definitional tautologies).

	David Mc

∂03-Feb-83  0920	DAM @ MIT-MC 	existence before essence  
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  09:20:01 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 09:18:23-PST
Date: Thursday, 3 February 1983  12:09-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: existence before essence


	I want to encourage you not to try to reduce the distinction
I am making to computational terminology (I agree that such a reduction
will ultimately be enlightening).  The distinction I am trying to make
is between NATURAL KINDS and MATHEMATICALLY DEFINED OBJECTS.  I do not
CURRENTLY understand this distinction in terms of any model of
cognitive processing.  All I am trying to say is that the distinction
can be made (a consensus can be reached about which things are
natural kinds and which things are mathematically defined in a complete
way).  Are you saying that the distinction cannot be made, or that
the distinction is not useful?
	
	Date: Wednesday, 2 February 1983  15:22-EST
	From: MINSKY

	I think you've gone astray by isolating yourself in a world that tries
	to understand ontology and intelligence without considering
	development and - frankly, the complexity of the underlying thought
	processes that seem so "obvious" to you.

	I CERTAINLY NEVER MEANT TO GIVE THE IMPRESSION THAT THE NATURE
OF ANY THOUGHT PROCESSES WAS OBVIOUS TO ME.  It is obvious to me however
that one plus one is two.  There is a difference between accepting
obvious truths and constructing metatheories of those truths (AI is
the science of the trivial, which is different from it being a trivial
science).  In constructing metatheories I will use all of the things I consider
to be true about mathematics and the world.
	Of course I do not think I have gone astray.  I am cutting up the
problem and attacking it piecemeal.  I am starting with rich and clear
"data" about cognition (the analytic truths held by adult humans).

	David Mc

∂03-Feb-83  0925	MINSKY @ MIT-MC 	And on his farm there was a cow  
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  09:24:58 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 09:22:52-PST
Date: Thursday, 3 February 1983  03:47-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   DAM @ MIT-OZ, GJS @ MIT-OZ, phil-sci @ MIT-OZ
Subject: And on his farm there was a cow
In-reply-to: The message of 2 Feb 1983  21:12-EST from BATALI


BATALI: Let's leave "good" aside for the moment.  Do you really think
     that we don't know that there are cows?  Do we know anything?

I think you may have misunderstood the context.  I was discussing with
DAM his idea that first one knows that something IS and then more
about what it is, roughly.  I thought he assumed that when one starts
using a word, this is because one knows that the referent IS, and then
refines that knowledge.  So I was saying that I thought that sometimes
one just starts with a word, like "good" and fools oneself into
thinking that one thus automatically knows that some such thing "is",
and only needs to know more about it.

No, I don't think we "know anything".  I think we "believe" many
things and probably we are right about some of them.  Things like cows
are somewhat extreme cases where things seem obvious, but I think it
is just a matter of degree.  Today I know that an entire beehive is a
single animal in all important senses, but years ago I considered that
to be some sort of ridiculous metaphor.  Today I also know that a
fertilized cow-egg is not a cow, but that there is no boundary where
it becomes a cow.

The key phrase is your own - in the expression, do you "really think"
that X.  Indeed, I know there are cows when I react casually.  The
joke is that when I "really think" then, indeed, I'm not so sure that
I know there are cows.  Your last sentence proves that you, too, are
insecure when you "really think"!   I don't recommend really thinking
except for technical purposes.

∂03-Feb-83  0931	DAM @ MIT-MC 	Piaget
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  09:31:38 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 09:30:42-PST
Date: Thursday, 3 February 1983  12:23-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Piaget


	Date: Wednesday, 2 February 1983  15:22-EST
	From: MINSKY


	Although symbols have nothing special (except, perhaps, via
	correspondence) with physical objects, the ways you know the nature of
	strings of symbols is very much the way you know the behaviors of
	letters in a line or toys you learned to play with.

	When you first played with
	real objects then, As Piaget shows fairly well, you didn't have the
	idea of "permanent object" very secure; that is, say, the idea of
	"place" apart from past.  For instance, when a dog loses his toy, he
	often runs to find it in the place it usually is.  To that mind, the
	ideas of Mathematics are not obvious and secure.

	Susan Carey (Professor, MIT Psychology Dept.) has developed some
very insightful criticisms of Piaget.  She feels (and I agree completely)
that Piaget misinterpreted his results.  The results do not indicate
that children don't have the idea of "permanent object" or "fixed place",
rather simply that they cannot distinguish those things which are permanent
from those things which are not.  Consider a puff of smoke, the water
in a glass after someone drinks it, the grains of sand on my shoe after
I've walked around.  There is no law of mathematics that says that
things in the real world are permanent, or that things don't teleport
to where they usually are.  If I programmed a robot with ZF set theory
(and ONLY ZF set theory) I see no reason it would behave any differently
from a small child.  (You really should talk to Susan Carey about this).

	David Mc

∂03-Feb-83  0945	DAM @ MIT-MC 	[MINSKY: innateness, sentences]
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  09:44:58 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 09:44:21-PST
Date: Thursday, 3 February 1983  12:37-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: [MINSKY: innateness, sentences]


DAM:

     1. I am trying to understand cognition.  
     2. This means that I am trying to get a more precise
        understanding of brains.
     3. I want to know what a statement is.
     4. I assume that statements exist (existence before essence).  
     5. But what is the essence of a statement?
     6. I know the essence of a symbol.

Minsky:

    I don't know what (you) mean.

	I have tried to explain what I mean by "I know the essence of".
I mean that I have a purely mathematical definition of it.  I don't
know what a "purely mathematical definition" really is, that is part of
the problem.  However I CAN JUDGE when I have a purely mathematical
definition (just like a linguist can judge grammatical sentences before
he can characterize what a grammatical sentence is).

	David Mc

∂03-Feb-83  0955	DAM @ MIT-MC 	Sparseness 
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  09:55:00 PST
Received: from MIT-ML by USC-ECLC; Thu 3 Feb 83 09:52:52-PST
Date: Thursday, 3 February 1983  12:47-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Sparseness


	Date: Wednesday, 2 February 1983  15:22-EST
	From: MINSKY

	I agree there is still a mystery about why mathematics is true.  But I
	don't see quite such a mystery about why we believe it is true -
	because I consider that we've just learned from experience (inside and
	outside the head).  Your problem is that you are sure you are right
	now, in a way qualitatively different from when you were equally sure
	of certain things, as a child, that turned out otherwise.

	This last ad hominem seems unfair.  I think I have demonstrated
willingness to consider your sparseness theory.  I have said that I have no
ironclad argument against it since there are clearly cases in which
your sparseness theory seems right (the evolution of eyes).  However
I simply see no reason to believe it in this case.  I have asked for
some such reason and I do not feel that I have gotten any significant
answers.  Why should I believe the sparseness theory for ZF set theory?

	I could just as easily accuse you of not considering the
possibility that these structures are innate in some important (non
sparseness) sense.

	David Mc

∂03-Feb-83  2242	MINSKY @ MIT-MC 	innateness, sentences  
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  22:42:12 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 22:23:38-PST
Mail-From: MINSKY created at  2-Feb-83 12:03:35
Date: Wednesday, 2 February 1983  12:03-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 1 Feb 1983  18:47-EST from MINSKY
Redistributed-to: jmc%su-ai at usc-eclc
Redistributed-by: JCMa at MIT-OZ
Redistributed-date: Friday, 4 February 1983, 00:58-EST

    DAM: I am not interested in what one has to know to know that A(BC)
     equals (AB)C, the answer seems to be: only the definitions.  I am
     interested in what the statement IS.
 
    MINSKY: Well, I'm not sure I agree.  I was being interested in
    what the symbols ARE, first.


Continuing this thought, what about the following as a proposal for
what the STATEMENT IS.  I realize that the following description is
still engaged with the idea that the statement is in a mind, so this
reply might still be missing what you're seeking.

A STATEMENT like A(BC)=(AB)C is (in the mind) a fragment of data that
certain processes can apply in various ways to various other data.  It
does not stand by itself as anything special.  This particular kind of
statement - the associative rule for symbol-strings - is used in certain
mental microworlds, e.g., thinking about mathematics, for
detaching tautologies - which then are treated as new detached objects
(which can be used, in the same contexts) in similar ways for similar
purposes.

This "detaching" has a special "operational" character in mathematics.
If you want to go somewhere, and you see a car on the street, you
can't just take that object just because it serves your purposes.  You
have to consider its history as well - who owns it, in particular.
The special thing about a statement in a mathematical theory is that
you don't have to consider ANY of its history - except that of the
initial context that it came from.  Other contexts, of course, share
this character, but mathematical thinking - or what I think you have
in mind, the special activity of deriving tautologies from definitions
- is special in using these detaching or "history-erasing" methods so
nearly exclusively.


Mail-From: DAM created at  2-Feb-83 13:24:44
Date: Wednesday, 2 February 1983  13:24-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar


	I would like to summarize our recent discussion.  I was
defending Chomskian linguistics and saying that I thought that certain
aspects of human language (the existence of a grammar with certain
properties) was innate.  You objected to "innate grammars" on the
grounds that one should concentrate on "meanings" rather than
language.  I agreed that meanings were important but did not see any
argument against there being an innate grammar for meanings.  You
responded by saying that meanings are in the world not in our minds
and are therefore not grammatical.
	Well I still think there are innate cognitive structures.
These innate structures are certainly not cows and clouds because
clouds and cows are not cognitive structures.  I have no problem
understanding the world and I do not really want to discuss the nature
of our universe.  On the other hand if there are innate cognitive
structures then how do we talk about them?  Cognitive structures can
be understood semantically but such a semantic understanding must
involve a RELATIONSHIP between a cognitive structure and the world.  I
still think that grammar provides a starting place to talk about the
innate cognitive structures which have meaning.  I think there is some
translation between english sentences and cognitive structures which
have meaning.

	Do you see any argument against assuming that there is some
sort of innate structural constraint (or grammar) on the cognitive
structures which have meaning?  If there are no structural constraints
on these structures then how do we talk about them?  What are they?  I
take this to be an empirical issue.


	David Mc

Mail-From: DAM created at  2-Feb-83 13:34:56
Date: Wednesday, 2 February 1983  13:34-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: What Must be Represented


	Date: Tuesday, 1 February 1983  21:30-EST
	From: BATALI

	With respect to a "structure of the mind":  I am claiming that the
	right way to look at things is to see what the mind must be able to
	represent before we worry about the structures that actually
	do the required representation.  Just as we want to worry about the
	fact that a program computes a Fourier transform before we look at the
	particular way (the algorithm) that it uses to do the computation.

	Well I agree with this.  I don't know what it has to
do with the innateness of grammar however.  I am interested in
representing mathematical truths primarily because I feel one can
at least get a pretty good understanding of the nature of mathematical
truth (it is easier than understanding what a duck is for example).
	However it is also important to understand how, in general,
mental structures might take on meaning.  The best available theory of
this is (I think) Tarskian semantics.  In any case I think one must
assume some structural constraints on the representation to even get off
the ground.

	David Mc

Mail-From: DAM created at  2-Feb-83 14:06:39
Date: Wednesday, 2 February 1983  14:06-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ, GJS @ MIT-OZ
Subject: innateness, sentences


	There is a fundamental ontological distinction I would
like to make.  Independent of whether or not this distinction is
important I would like to claim that the distinction can at least be
made: it is possible to divide things into two kinds.
	First there are real world things like ducks and clouds.
These things are NATURAL KINDS which means that we know what they are
and that they exist but we do not know the COMPLETE NATURE of these things.
You have observed many times that mathematical understanding of a domain
comes AFTER an intuitive, imprecise understanding.
I think this is the source of the existentialist doctrine "existence
comes before essence".  I think that in our understanding of the world
existence always does come before essence, i.e. we know that there
are cows before we know the complete nature of what a cow is.
	There is a second kind of thing however. We do know the complete
nature of things of this second kind.  These things are the things of
mathematics.  They are COMPLETELY DEFINED and have nothing to do with
the real world.  For example consider a countable set of 26 points, i.e.
an alphabet.  Now consider the set of finite strings over this alphabet.
This set of strings is totally defined and I know its nature completely.
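
	As a small illustration of what I mean by "totally defined"
(the lowercase letters stand in for the 26 points, and the enumeration
order is my own choice, nothing hangs on it): every string in this set
turns up after finitely many steps of a definite procedure.

import itertools
import string

def all_finite_strings(alphabet=string.ascii_lowercase):
    # Enumerate the finite strings over the alphabet, shortest first.
    for length in itertools.count(0):
        for chars in itertools.product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_finite_strings()
print([next(gen) for _ in range(5)])   # ['', 'a', 'b', 'c', 'd']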

	Some real world objects are better understood than others.
For example an electrical circuit is a real world object but
its nature is fairly well understood by making an analogy to
a mathematical object (a set of equations).  A computer is a real
world object but its behavior can be VERY well understood by
making an analogy to a virtual machine which is a totally defined
mathematical structure.

	It seems to me that science progresses by making its
understanding of natural kinds more concrete.  One natural kind
can be defined in terms of another (it is discovered that water
is H2O).  I am trying to understand cognition.  This means that
I am trying to get a more precise understanding of brains.
I want to know what a statement is.  I assume that statements
exist (existence before essence).  But what is the essence
of a statement?  I know the essence of a symbol.  I can use
and understand statements but I do not know what they are.

	David Mc

Mail-From: BATALI created at  2-Feb-83 14:14:33
Date: Wednesday, 2 February 1983  14:14-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar
In-reply-to: The message of 2 Feb 1983  13:24-EST from DAM

    From: DAM

Thank you for the summary.  It is accurate.

    Cognitive structures can
    be understood semantically but such a semantic understanding must
    involve a RELATIONSHIP between a cognitive structure and the
    world. 

I confess that I don't understand your point here. I am claiming that
minds must know about certain kinds of things, such as time and
process and communication (for example).  This then is a semantic
characterization of what must be innate.  I showed how, with such
knowledge, the grammatical notion of "sentence" could be understood.

    I
    still think that grammar provides a starting place to talk about the
    innate cognitive structures which have meaning.  I think there is some
    translation between english sentences and cognitive structures which
    have meaning.

I think that english sentences have meaning.  When we hear a sentence,
we must determine the meaning.  The process of determining the meaning
may involve transforming the english into some other meaningful
representations.

    	Do you see any argument against assuming that there is some
    sort of innate structural constraint (or grammar) on the cognitive
    structures which have meaning?  If there are no structural constraints
    on these structures then how do we talk about them?  What are they?  I
    take this to be an empirical issue.

There certainly are such structural constraints on the
representational system.  But I think that unless neurophysiology just
breaks wide open, the better path is to determine the semantical
constraints on the representational system -- to determine what the
structures must be able to represent.

The goal here is to determine answers to the question: "What must we
know about, in order to know about everything?"  Some answer to this
question must be a priori, and enough mechanism to learn the
knowledge that constitutes the answer must be innate.

Mail-From: DAM created at  2-Feb-83 14:32:14
Date: Wednesday, 2 February 1983  14:32-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Wednesday, 2 February 1983  12:03-EST
	From: MINSKY

	A STATEMENT like A(BC)=(AB)C is (in the mind) a fragment of data that
	certain processes can apply in various ways to various other data.  It
	does not stand by itself as anything special.  This particular kind of
	statement - the associative rule for symbol-strings - is used in certain
	mental microworlds, e.g., thinking about mathematics, for
	detaching tautologies - which then are treated as new detached objects
	(which can be used, in the same contexts) in similar ways for similar
	purposes.

	I find the above description quite vague.  I am not sure what
you mean by "a fragment of data", do you mean a pointer in some
LISP-like memory.  Do you mean an s-expression.  To the extent that
you do make concrete claims I disagree with one such claim: that
"tautologies .. are treated as new detached objects".  I think there
is a clear destinction between "statements" and "objects", this
is analogous to the destinction between formal sentences and
formal terms, or between english sentences and english noun phrases.

	This "detaching" has a special "operational" character in mathematics.
	Other contexts, of course, share
	this character, but mathematical thinking - or what I think you have
	in mind, the special activity of deriving tautologies from definitions
	- is special in using these detaching or "history-erasing" methods so
	nearly exclusively.

	Perhaps the right DEFINITION of "detachment" is that process which
converts a real-world natural kind to a real-world-independent mathematical
model.  There also seem to be partial detachments (as you point out)
in which one might DEFINE water to be H2O.  I think this is also the phenomenon
Heidegger calls "decontextualization".

	David Mc

Date: Wednesday, 2 February 1983  14:56-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   KDF @ MIT-OZ
Cc:   ISAACSON @ USC-ISI, LEVITT @ MIT-OZ, phil-sci @ MIT-MC
Subject: "primitive" representations of space and time
In-reply-to: The message of 1 Feb 1983  18:23-EST from KDF

    From: KDF

    	I would heartily agree that sequence is likely to underlie
    notions of time.  The common culture in psychology seems to think that
    memory is organized episodically, around events (individuated by other
    theories, such as objects being in places and "things taking place",
    what Hayes calls HISTORIES), rather than by a "timestamp" attached to
    each memory.  However, the phenomenon raised by Levitt - some "wired
    in" notion of time that is rich enough to detect periodicity - seems
    hard to construct from a purely sequential account of time, so it may
    be an additional "innate" facility.  Does anyone have more data on
    this?

I recall reading in "The Meaning of Memory," by Lord Russell Brain in
*Modern Perspectives in World Psychiatry* that neurophysiological
experiments indicate that long-term memory is stored temporally in the
cerebral cortex.  As new episodes are experienced, older similar
episodes are "pushed back" further in the cortex.  I really don't
remember the article all that well (perhaps it's been pushed too far
back), but at least you have the cite.

Mail-From: GAVAN created at  2-Feb-83 15:01:12
Date: Wednesday, 2 February 1983  15:01-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   Phil-sci @ MIT-OZ
Subject: Where is JMC?
In-reply-to: The message of 1 Feb 1983  18:37-EST from DAM

MC just dropped the old network protocols, so, for the time being, JMC
(and perhaps others outside MIT) are not receiving phil-sci
contributions.  I'm trying to find ways to get the mail to JMC, who
has sent mail to phil-sci-request informing us of the lossage.  Others
who receive mail outside MIT should send a message to
phil-sci-request@MIT-MC if they receive this message, so we won't
scramble around trying to find alternative mail routes for you.

Mail-From: MINSKY created at  2-Feb-83 15:22:20
Date: Wednesday, 2 February 1983  15:22-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   GJS @ MIT-OZ, phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 2 Feb 1983  14:06-EST from DAM


I will try to react to your summary, though our language is so different
that there may be problems.

     First there are real world things like ducks and clouds.
     These things are NATURAL KINDS which means that we know what they
     are and that they exist but we do not know the COMPLETE NATURE of
     these things. ... we know that there are cows before we know the
     complete nature of what a cow is.

This doesn't make much sense to me.  We make symbols frequently, for
one reason or another, and attach stuff to them.  Later we change the
attachments.  Sometimes, when we do this, it is because we "understand
something better".  What I don't understand is what you mean to say
that, from the start, "we know that there are cows".  Do we know that
"there is GOOD"?  What is different about cows is the attachment to
all the expertise we have about sensory objects.  It isn't that we
know there are cows, but that we set ourselves to use "cow" like we
use words for things already familiar, like animals.

     Here is a second kind of thing however. We do know the COMPLETE
     NATURE of things of this second kind.  These things are the things of
     mathematics.  They are COMPLETELY DEFINED and have nothing to do
     with the real world.  For example consider a countable set of 26
     points, i.e.  an alphabet.  Now consider the set of finite
     strings over this alphabet.  This set of strings is totally
     defined and I know its nature completely.

This is where I think you've gone astray by isolating yourself in a
world that tries to understand ontology and intelligence without
considering development and - frankly, the complexity of the
underlying thought processes that seem so "obvious" to you.  Unless I
have missed a basic point (which I actually consider quite likely
because of prior recent experience with you) this kept you from seeing
my point about the nature of thinking about symbols.  Although symbols
have nothing special (except, perhaps, via correspondence) with
physical objects, the ways you know the nature of strings of symbols
is very much the way you know the behaviors of letters in a line or
toys you learned to play with.

The point I am missing is the source of your assurance that you
understand the COMPLETE NATURE of them.  When you first played with
real objects then, as Piaget shows fairly well, you didn't have the
idea of "permanent object" very secure; that is, say, the idea of
"place" apart from past.  For instance, when a dog loses his toy, he
often runs to find it in the place it usually is.  To that mind, the
ideas of Mathematics are not obvious and secure.  It seems to me that
this is illustrated by your next point:

     For example an electrical circuit is a real world object but its
     nature is fairly well understood by making an analogy to a
     mathematical object (a set of equations).  A computer is a real
     world object but its behavior can be VERY well understood by
     making an analogy to a virtual machine which is a totally defined
     mathematical structure.


My argument is that this is partly an illusion, since DEVELOPMENTALLY,
the mental machinery you use for that mathematics evolved in the first
connection.   

IMPORTANT NEW POINT: I do not mean to say that the brain has to go
outside itself into the real world to get this experience.  To me,
that boundary between mind and world, seen as between brain and body,
is just an obsolete step in our understanding.  The brain has many
parts, like a computer.  So one can "learn about object-things" inside
the brain: suppose some brain-part is like a computer memory; another
part finds that when it puts things there, they stay there.  Thus one
brain-part can learn about "objects" and about Piaget's "permanent
object".  Thus, it can learn to deal with symbols.  BUT ONLY IF THERE
ARE BRAIN PLACES IN WHICH THE SYMBOLS are OBJECTS of that permanent
sort.

     1. I am trying to understand cognition.  
     2. This means that I am trying to get a more precise
        understanding of brains.

ok.
     3. I want to know what a statement is.
     4. I assume that statements exist (existence before essence).  

I don't understand that.  I understand better what symbol-strings are.

     5. But what is the essence of a statement.
     6. I know the essence of a symbol.

I don't know what that means.

     7. I can use and understand statements but I do not know what they are.

The sparseness theory says that we will all end up using certain
simple collections of symbol-manipulation rules in much the same way.

To me, the unsolved question is why some simple sets of axioms - seen
as string-manipulation rules - give rise to such good stuff, like
numbers, which behave consistently under the substitution rules and
don't collapse into triviality.  The same question applies also to
objects in the real world.  I believe you were right to reject, as too
physicalist, my observation that in the modern physics of the past
decade, this view isn't even considered still: that is, in the present
view "objects" are doomed to lose their identities in rare symmetry
exchanges over 10**30+ years.  I agree there is still a mystery about
why mathematics is true.  But I don't see quite such a mystery about
why we believe it is true - because I consider that we've just learned
from experience (inside and outside the head).  Your problem is that
you are sure you are right now, in a way qualitatively different from
when you were equally sure of certain things, as a child, that turned
out otherwise.  I am just as sure about soundness of arithmetic as
you, in some sense, but regard that conviction as a procedural
consequence of falling into this particular "eigen-belief" because of
sparseness.  I don't think we shall ever fall out again, but don't see
much mystery in how we got here or why we believe in the calculus of
statements.

I repeat, I do see it as a mystery that such simple systems as ZF
yield so much.  Could it be that we agree in some sense on what
remains obscure, or do you insist that NATURAL KINDS and COMPLETE
NATURE involve yet other mysteries?

  -- marvin

Mail-From: MINSKY created at  2-Feb-83 15:26:25
Date: Wednesday, 2 February 1983  15:26-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   GAVAN @ MIT-OZ
Cc:   Phil-sci @ MIT-OZ
Subject: Where is JMC?
In-reply-to: The message of 2 Feb 1983  15:01-EST from GAVAN


Thank you, GAVAN.  I appreciated JMC's contributions, too, and hope you
can find a route to reattach him.

Date: 2 Feb 1983 1428-PST
Sender: ISAACSON at USC-ISI
Subject: Re:  Where is JMC?
From: ISAACSON at USC-ISI
To: PHIL-SCI-REQUEST at MIT-MC
Cc: phil-sci at MIT-MC
Message-ID: <[USC-ISI] 2-Feb-83 14:28:16.ISAACSON>


In-Reply-To: Gavan's message of Wednesday, 2 Feb 1983, 15:01-EST


This is to confirm that phil-sci messages do appear to come
through, although sometimes in scrambled chronological order.  It
makes for interesting reading to see responses before the
messages that triggered them...

As to JMC, I recall that he mentioned some time ago that he expects a
site-visit from Col.  Adams of ARPA on February 2, 1983.  I must
conclude that he is simply busy with his guest.

-- JDI


Date: 2 Feb 1983 1444-PST
Sender: ISAACSON at USC-ISI
Subject: Re: "primitive" representations of space and time
From: ISAACSON at USC-ISI
To: KDF at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI] 2-Feb-83 14:44:57.ISAACSON>


In-Addition-To: Gavan's message of Wednesday, 2 Feb 1983,
14:56-EST


The great Brain Research publication, NEWSWEEK, has a current
cover page story on "How the Brain Works".  It speculates some on
things you raised.


p.s.  It says there that you guys are building your "Connection
Machine" in order to test Caltech's Hopfield's ideas.  What a
connection!  The Pulitzer Prize is in the wings.


Mail-From: GAVAN created at  2-Feb-83 19:24:38
Date: Wednesday, 2 February 1983  19:24-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   phil-sci @ MIT-OZ

As of this message, JMC is back on the phil-sci list.

Mail-From: GAVAN created at  2-Feb-83 19:40:58
Date: Wednesday, 2 February 1983  19:40-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   Minsky @ MIT-OZ, phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 2 Feb 1983  14:06-EST from DAM

    From: DAM

    	There is a second kind of thing however. We do know the complete
    nature of things of this second kind.  These things are the things of
    mathematics.  They are COMPLETELY DEFINED and have nothing to do with
    the real world.  For example consider a countable set of 26 points, i.e.
    an alphabet.  Now consider the set of finite strings over this alphabet.
    This set of strings is totally defined and I know its nature completely.

So what?  You can milk a cow but you can't milk the set of finite strings
over an alphabet.  

    	Some real world objects are better understood than others.
    For example an electrical circuit is a real world object but
    its nature is fairly well understood by making an analogy to
    a mathematical object (a set of equations).  A computer is a real
    world object but its behavior can be VERY well understood by
    making an analogy to a virtual machine which is a totally defined
    mathematical structure.

    	It seems to me that science progresses by making its
    understanding of natural kinds more concrete.  

Are natural kinds constant over time?  Across individuals?  Do they
"really" exist, or are they just cultural conventions?

    One natural kind
    can be defined in terms of another (it is discovered that water
    is H2O).  I am trying to understand cognition.  This means that
    I am trying to get a more precise understanding of brains.

Do you believe that mind is reducible to the physiology of the brain?

    I want to know what a statement is.  I assume that statements
    exist (existence before essence).  But what is the essence
    of a statement?  I know the essence of a symbol.  I can use
    and understand statements but I do not know what they are.

Since you produce them, maybe they are what you use and understand
them for.  Maybe you use and understand them in order to enter into
rational discourse.  Maybe you enter into rational discourse because
you recognize the fallibility of your perceptual and cognitive
processes.  Maybe this has something to do with the consensus theory.
Maybe.  See the Habermas paper I left on your desk this morning.

Mail-From: BATALI created at  2-Feb-83 21:12:55
Date: Wednesday, 2 February 1983  21:12-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   MINSKY @ MIT-OZ
Cc:   DAM @ MIT-OZ, GJS @ MIT-OZ, phil-sci @ MIT-OZ
Subject: And on his farm there was a cow
In-reply-to: The message of 2 Feb 1983  15:22-EST from MINSKY

    From: MINSKY

    What I don't understand is what you mean to say
    that, from the start, "we know that there are cows".  Do we know that
    "there is GOOD"?  What is different about cows is the attachment to
    all the expertise we have about sensory objects.  It isn't that we
    know there are cows, but that we set ourselves to use "cow" like we
    use words for things already familiar, like animals.

Let's leave "good" aside for the moment.  Do you really think that we
don't know that there are cows?  Do we know anything?


∂03-Feb-83  2241	BATALI @ MIT-MC 	Semantic Grammar  
Received: from USC-ECLC by SU-AI with NCP/FTP; 3 Feb 83  22:41:04 PST
Received: from MIT-MC by USC-ECLC; Thu 3 Feb 83 22:18:55-PST
Mail-From: BATALI created at  1-Feb-83 21:30:18
Date: Tuesday, 1 February 1983  21:30-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar
In-reply-to: The message of 1 Feb 1983  18:49-EST from DAM
Redistributed-to: jmc-lists%su-ai at usc-eclc
Redistributed-by: JCMa at MIT-OZ
Redistributed-date: Friday, 4 February 1983, 00:57-EST

    From: DAM

    	Who said the world was sentences?  Why do you keep attacking
    a straw man?

Sorry, I thought you said this.  What you actually said was that the
world could be "defined to be behaviors and sense data."  And later
that sense data could be expressed by sentences.  The straw man is gone.

    Do you believe that minds can be modelled by computers?
    Are meanings in minds?  Are ducks in minds?  If both meanings
    and ducks are in the world then what is in minds?  If you want
    a theory of AI as opposed to a theory of the world you need some
    theory of the structure of the mind.  Where do we start with a
    theory of mind?

    ok.  What is a process (a Turing machine?)  What is constructed in my
    mind when I hear a statement (a cloud?  a cow?).  Are there innate mental
    structures? (are these clouds?).

These are tough questions!

I assume that they are brought up in response to the semantic grammar
discussion.  I'm not sure what relation they have to it though.  Do
you think that I am denying that minds can be modelled by computers?
I think that minds ARE computers.  Meanings, as Putnam points out,
"just ain't in the head."  Certainly ducks aren't.

With respect to a "structure of the mind":  I am claiming that the
right way to look at things is to see what the mind must be able to
represent before we worry about the structures that actually
do the required representation.  Just as we want to worry about the
fact that a program computes a Fourier transform before we look at the
particular way (the algorithm) that it uses to do the computation.
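
To make that what/how distinction concrete, here is a minimal sketch in
Python (illustrative only, not anything proposed in the discussion):
dft_spec states WHAT is computed, the defining equation of the discrete
Fourier transform, while fft is one particular HOW, a radix-2 recursion
that computes the same function.

import cmath

def dft_spec(x):
    # WHAT is computed: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    # HOW it might be computed: radix-2 Cooley-Tukey recursion (len(x) a power of two)
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return [even[k] + tw[k] for k in range(N // 2)] + \
           [even[k] - tw[k] for k in range(N // 2)]

signal = [1, 2, 3, 4]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_spec(signal), fft(signal)))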

↑
Date: 1 Feb 1983 1707-PST
Sender: ISAACSON at USC-ISI
Subject: Sparseness: Definition
From: ISAACSON at USC-ISI
To: MINSKY at MIT-MC, DAM at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI] 1-Feb-83 17:07:42.ISAACSON>


I vaguely remember the first time Minsky discussed sparseness on
this list.  I'd like a definition or short explanation.  If it is
what I think it is, I may have something to say.  Thanks.

↑
Mail-From: MINSKY created at  1-Feb-83 19:56:30
Date: Tuesday, 1 February 1983  19:56-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 1 Feb 1983  19:16-EST from DAM


DAM: Something funny is going on here.  I feel that there is no
     problem with what the symbols are, I feel I have a complete
     real-world-independent DEFINITION of what a symbol is: it is a point
     drawn from an alphabet where an alphabet is any set of points.  A
     string of symbols is a tuple of points.  We can also define
     concatenation.


Something sure is funny.  I feel you are in some sort of vicious
circle if you don't see what I'm trying to do - namely, examining the
notion of symbol, definition and substitution.  If one really wants to
study the foundation of statement, tautology and truth, doesn't one
have to see what are the assumptions underlying the use of
definitions?  

Is it that this seems to be going outside some special area?  I agree with
all your examples of how constriction can serve scientific purposes.
But I think it may be absurd for you to want to understand what
statements are, and yet refuse to examine the basis for symbol-usage.

(BTW, I'm not asserting that I have discovered anything important about that.
I just thought it might be profitable to think about it a bit.)
↑
Mail-From: DAM created at  1-Feb-83 19:16:02
Date: Tuesday, 1 February 1983  19:16-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Tuesday, 1 February 1983  18:47-EST
	From: MINSKY

		DAM: I am interested in what the statement IS.


	I was being interested in what the symbols ARE, first.

Something funny is going on here.  I feel that there is no problem
with what the symbols are,  I feel I have a complete real-world-independent
DEFINITION of what a symbol is: it is a point drawn from an alphabet where an
alphabet is any set of points.  A string of symbols is a tuple of points.
We can also define concatenation.  I would like some such definition
of what a mathematical statement is, but I would settle for ANY definition
whether it is real-world-independent or not, just as long as it is not
"a statement is a statement".

	David Mc
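
A minimal rendering of this definition in Python (purely illustrative;
the names alphabet, string_of, and concat are mine): points are arbitrary
objects, a string of symbols is a tuple of points, and concatenation is
tuple concatenation.

alphabet = set("abcdefghijklmnopqrstuvwxyz")   # an alphabet: any set of points

def string_of(*points):
    # a string of symbols is a tuple of points drawn from the alphabet
    assert all(p in alphabet for p in points)
    return tuple(points)

def concat(s, t):
    # concatenation of two strings is again a tuple of points
    return s + t

assert concat(string_of("a", "b"), string_of("c")) == ("a", "b", "c")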
↑
Mail-From: DAM created at  1-Feb-83 18:49:43
Date: Tuesday, 1 February 1983  18:49-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar


	Date: Tuesday, 1 February 1983  14:47-EST
	From: BATALI

	When I argue against a semantic grammar, I do so for the
	same reasons that I don't think that the world is a set
	of sentences.  I think it is ducks and clouds, and I don't
	think that there is a grammar for ducks.

	Who said the world was sentences?  Why do you keep attacking
a straw man?  Do you believe that minds can be modelled by computers?
Are meanings in minds?  Are ducks in minds?  If both meanings
and ducks are in the world then what is in minds?  If you want
a theory of AI as opposed to a theory of the world you need some
theory of the structure of the mind.  Where do we start with a
theory of mind?

	Actually, I was thinking of starting with notions of time and process
	and communication.

ok.  What is a process (a Turing machine?)  What is constructed in my
mind when I hear a statement (a cloud?  a cow?).  Are there innate mental
structures? (are these clouds?).

	David Mc
↑
Mail-From: MINSKY created at  1-Feb-83 18:47:39
Date: Tuesday, 1 February 1983  18:47-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 1 Feb 1983  18:27-EST from DAM



	I am not interested in what one has to know to know that A(BC)
     equals (AB)C, the answer seems to be: only the definitions.  I am
     interested in what the statement IS.


Well, I'm not sure I agree.  I was being interested in what the symbols
ARE, first.
↑
Mail-From: DAM created at  1-Feb-83 18:37:29
Date: Tuesday, 1 February 1983  18:37-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, syntactic substitution


	Date: Tuesday, 1 February 1983  17:23-EST
	From: MINSKY

	Indeed, such an assumption also underlies (i) the Turing machine
	formulation, since the symbols must "stay" on the tape from one "time"
	to another and (ii) any physio-psychological of thinking that has an
	idea of memory.

	Notice, by the way, that the existence of "invariant objects" is no
	longer trivial, even physically

I am not questioning the nature of reality.  I am only asking for a concrete
theory of sparseness (a metatheory if you will), you can use any
concrete mathematical notions you want in this metatheory.

	David Mc
↑
Mail-From: DAM created at  1-Feb-83 18:27:19
Date: Tuesday, 1 February 1983  18:27-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Minsky @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Tuesday, 1 February 1983  16:52-EST
	From: MINSKY

	Well, sob, I'm sure that the machine thinking about the sets is
	isomorphic at whatever level with some Turing machine.  But our
	absolute truth theory would have to assume a lot to begin with
	Turing machines.

	I share your faith in Turing completeness.  But I do not
understand your apparent feeling that there is some problem with
starting with Turing machines.  I think we understand what a Turing machine
is, and if we want to say that the sparseness space is the space of
Turing machines, that is fine.  We could choose any of
the other foundations for computation and I would be happy.

	Consider the "small truth" expressed by the associative
	law for strings:

       "The result of appending BC to A is the same as that of
       appending C to AB."

	It seems to me that without this one can infer little else.  Yet
	it seems irrefutable that both are the same since they are both ABC.
	What does one have to know to know this?

	I sense that there is some confusion about the enterprise of
constructing a theory of objective mathematics.  I do not doubt any
standard mathematical truth, of course A(BC) equals (AB)C.  I also
know exactly what you mean when you say this.  What I do not know is
what a statement or a meaning IS.  If you conjectured that a
statement was a string of symbols (or a Hausdorff topological space) I
would have no trouble understanding your conjecture.  If you said that
a statement was a truth function on a certain class of objects I might
even believe you.

	I am not interested in what one has to know to know that A(BC)
equals (AB)C, the answer seems to be: only the definitions.  I am interested
in what the statement IS.

	David Mc
↑
Date: Tuesday, 1 February 1983  18:23-EST
Sender: KDF @ MIT-OZ
From: KDF @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   LEVITT @ MIT-OZ, phil-sci @ MIT-MC
Subject: "primitive" representations of space and time
In-reply-to: The message of 1 Feb 1983  16:39-EST from ISAACSON at USC-ISI

	I would heartily agree that sequence is likely to underlie
notions of time.  The common culture in psychology seems to think that
memory is organized episodically, around events (individuated by other
theories, such as objects being in places and "things taking place",
what Hayes calls HISTORIES), rather than by a "timestamp" attached to
each memory.  However, the phenomenon raised by Levitt - some "wired
in" notion of time that is rich enough to detect periodicity - seems
hard to construct from a purely sequential account of time, so it may
be an additional "innate" facility.  Does anyone have more data on
this?
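
A toy contrast between the two organizations in Python (the structures
and field names are hypothetical, only to make the distinction concrete):
a flat list of timestamped records versus episodes that group memories
around the events that individuate them.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TimestampedMemory:
    t: float                          # a "timestamp" attached to each memory
    content: str

@dataclass
class Episode:
    event: str                        # the individuating event (a "history")
    memories: List[str] = field(default_factory=list)

# timestamp organization: retrieval keyed by clock time
log = [TimestampedMemory(3.0, "saw the dog"), TimestampedMemory(7.5, "heard thunder")]

# episodic organization: retrieval keyed by which event a memory belongs to
episodes = [Episode("walk in the park", ["saw the dog", "heard thunder"])]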
↑
Mail-From: MINSKY created at  1-Feb-83 17:23:01
Date: Tuesday, 1 February 1983  17:23-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, syntactic substitution
In-reply-to: The message of 1 Feb 1983  12:24-EST from DAM


Let us examine further what is required for minds to know about
strings of symbols - or to manipulate them.  I am continuing to
discuss the prerequisites for the associative law - because it seems
to me that it must underlie any derivation of a tautology via syntax.
This is because one must obtain the same string in two ways, to get
a tautology.   (I think, anyway - haven't thought it out very far yet.)

Now consider this interpretation of the two string-things (A.B).C and
A.(B.C) which are both imagined to be A.B.C.  They would NOT appear
the same if you could see their history - e.g., how they got where
they were!  Thus, there seems to be a requirement that we could call
"objectness" - that a symbol be recognizable (over time) as the same,
though different things have happened to it.

Indeed, such an assumption also underlies (i) the Turing machine
formulation, since the symbols must "stay" on the tape from one "time"
to another and (ii) any physio-psychological theory of thinking that has an
idea of memory.

Notice, by the way, that the existence of "invariant objects" is no
longer trivial, even physically.  Strangely enough, the Pauli
Principle seems to reject it, in the sense of not permitting two
"objects" to have the same state.  

(This principle is usually formulated locally, e.g., you can't have
two same states of electrons in an atom.  I suspect that it is more
global, but that the differences are less observable, though they can be
manifested as "long range correlations".  I once discussed this with
Feynman, who suggested that it isn't much use asking a random
physicist more about this, but that some technical answers might be
found in Wigner's theory of observation, which depends upon "Wigner
integrals" that average out all the uncertainty outside the
experimental observation-situation.)

Thus, the permanence of physical tokens is no longer quite as
taken-for-granted as it was in Newtonian times, and one cannot assume
any real Turing machine will always work properly.  Our intuition that
pure inference in, say, ZF, is sound could thus rest on assuming a universe of
(symbolic) objects that can stand in the required history-invariant
way.

To summarize, the truth of symbolic reasoning seems to depend on the
way in which symbols - and symbol-strings - can be recognized apart
from their past use-history.  This seems all too obvious, but success
often accrues to finding and announcing something even more "obvious".
What next?
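
A toy version of the objectness point in Python (illustrative only): the
two construction histories (A.B).C and A.(B.C) differ as records of how
the string was built, but collapse to the same string once the symbols
are treated as history-free objects.

A, B, C = ("a",), ("b",), ("c",)

# two different histories: trees recording how A.B.C was assembled
history_1 = ("concat", ("concat", A, B), C)    # (A.B).C
history_2 = ("concat", A, ("concat", B, C))    # A.(B.C)
assert history_1 != history_2                  # seen with their pasts, they differ

def flatten(t):
    # forget the history: keep only the resulting sequence of symbols
    if isinstance(t, tuple) and t and t[0] == "concat":
        return flatten(t[1]) + flatten(t[2])
    return t

assert flatten(history_1) == flatten(history_2) == ("a", "b", "c")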
↑
Mail-From: GAVAN created at  1-Feb-83 17:02:40
Date: Tuesday, 1 February 1983  17:02-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   JCMa @ MIT-OZ, phil-sci @ MIT-OZ
Subject: Putnam on Chomsky, and Innateness
In-reply-to: The message of 1 Feb 1983  10:54-EST from DAM

    I agree with McCarthy that the emphasis on references is a
    weakness in philosophy.

This is not a weakness in philosophy, it just ain't philosophy.  The
meat of philosophy is in the arguments.  References to the literature
are useful backpointers, and (in my view) that's all they should be
offered as and taken as.
↑
Mail-From: MINSKY created at  1-Feb-83 16:52:02
Date: Tuesday, 1 February 1983  16:52-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 1 Feb 1983  12:24-EST from DAM


DAM: Or perhaps the language in which the notion of two points is
     expressed is actually a Turing machine arrived at via sparseness.

Well, sob, I'm sure that the machine thinking about the sets is
isomorphic at whatever level with some Turing machine.  But our
absolute truth theory would have to assume a lot to begin with
Turing machines.

A Post machine is a bunch of substitution rules for symbols in
string-slots.  A system that reasons with Frames is also much the
same, given that one organizes the substitution rules.  Inference
rules in FOL, same.  When you speak of syntactic language-rules
perhaps similar.  Perhaps we have no technical differences, in some
sense, except in choosing different formulations for different
purposes - and the question remains as to why there are consistent
consequences of - say - systems of substitution rules.  For example,
consider the "small truth" expressed by the associative law for
strings:
   
       "The result of appending BC to A is the same as that of
       appending C to AB."

It seems to me that without this one can infer little else.  Yet
it seems irrefutable that both are the same since they are both ABC.
What does one have to know to know this?
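
A minimal string-rewriting sketch in Python (illustrative only; the rules
are arbitrary and not any particular formalism named above), just to show
"substitution rules for symbols in string-slots" in action:

rules = [("ab", "ba"), ("b", "")]        # arbitrary illustrative rewrite rules

def rewrite_once(s):
    # apply the first rule whose left-hand side occurs in s, at its leftmost occurrence
    for lhs, rhs in rules:
        i = s.find(lhs)
        if i >= 0:
            return s[:i] + rhs + s[i + len(lhs):]
    return None                          # no rule applies: halt

def run(s, limit=100):
    trace = [s]
    while limit > 0:
        nxt = rewrite_once(trace[-1])
        if nxt is None:
            break
        trace.append(nxt)
        limit -= 1
    return trace

print(run("aabb"))    # ['aabb', 'abab', 'baab', 'baba', 'bbaa', 'baa', 'aa']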
↑
Date: 1 Feb 1983 1339-PST
Sender: ISAACSON at USC-ISI
Subject: Re: "primitive" representations of space and time
From: ISAACSON at USC-ISI
To: LEVITT at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI] 1-Feb-83 13:39:39.ISAACSON>


In-Reply-To: Your message of Tuesday, 1 Feb 1983, 01:14-EST


When I said that "I'm stuck with the more primitive concept of
SEQUENTIALITY" I meant that I do this intentionally.  I think the
notion of "sequence" is a precursor to the notion of time, and is
shared with (one-dimensional) space.  In other words, if time and
space ARE intrinsically different concepts which require
different types of representations, as some would clearly argue,
then the primitive concept of SEQUENCE probably underlies them
both.  And I think it is more easily acquired from spatial (even
one-dimensional) experiences.

Then you ask about "manifesting time" and drag in Heidegger and
Hegel.  I tried my best to relate to the "manifestation" business
in a message I just sent to JCMa on this list.  As to the named
philosophers, let me assure you that I will never sneak up on you
with Hegel.  If I wanted to discuss Hegel, dialectics, and such,
I would put a billboard-size label on the product.

↑
Date: 1 Feb 1983 1317-PST
Sender: ISAACSON at USC-ISI
Subject: Re: meta-epistemology
From: ISAACSON at USC-ISI
To: JCMa at MIT-MC
Cc: phil-sci at MIT-MC, isaacson at USC-ISI
Message-ID: <[USC-ISI] 1-Feb-83 13:17:58.ISAACSON>


In-Reply-To: Your message of Tuesday, 1 Feb 1983, 03:32-EST



JCMa: It seems that something which encodes must have an explicit
representation of what it encodes.  On the other hand, something
which manifests may or may not have an explicit representation.
That's a big difference!


Well, you may have a good point there, and that's, perhaps, why I
was not happy with "encode" and introduced "manifest".  "Encode"
requires an interpreter.  And I'm not sure who or what is the
interpreter.  "Manifest" assumes an external observer.

Along these "muddled" lines, here are some more half-baked
thoughts.


WARNING: What is said below about time reflects timeless
confusion!


During conception we have the fusion of two biological clocks
into a new one that will be imbedded in the new organism for its
entire life.  Awhile later we have the initialization of a
"cognitive clock", whether or not it is "innate" in some sense.
The two clocks, i.e., biological and cognitive, may be implemented
in similar physiological stuff, but they are not necessarily
identical!  So far, I count TWO clocks internal to the fetus.

Now, how about some EXTERNAL clocks?

Those two internal clocks come into being in view of the
observer, having his own ongoing clocks (biological and
cognitive).  The "time" clocked by the fetus and the "time"
clocked by the observer need not coincide.  After all, at one
point (in physical time) the observer's clocks were ticking away
while the fetus's clocks were not in existence yet.  The fetus's
clocks started literally from time zero and they are now trying
to catch up, or synchronize with the observer's internal clocks.

Now, all four clocks are immersed in some "physical time" of our
galaxy and some good chunk of the universe adjacent to it, which
may be *relative* to some other "time" at some remote galaxies
fast receding away.  You see, I'm easily confused when it comes
to questions of time, and it is timely I stop right now.

↑
Mail-From: BATALI created at  1-Feb-83 14:47:41
Date: Tuesday, 1 February 1983  14:47-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar
In-reply-to: The message of 1 Feb 1983  14:34-EST from DAM

    From: DAM

    ok, meaning is important.  What is your objection to grammar?  I think
    the notion of grammar all by itself is inadequate, but it's a place to
    start.

We must have some way to represent what we need to, that way will
require a grammar.  It's a place to start only in the sense that we
need it to get started -- we can't say anything without a grammar, but
the grammar itself doesn't say anything for us.

When I argue against a semantic grammar, I do so for the same reasons
that I don't think that the world is a set of sentences.  I think it
is ducks and clouds, and I don't think that there is a grammar for ducks.

  Where would you suggest starting, with the notion of a Turing
    machine?

Actually, I was thinking of starting with notions of time and process
and communication.
↑
Mail-From: DAM created at  1-Feb-83 14:34:16
Date: Tuesday, 1 February 1983  14:34-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar


	Date: Tuesday, 1 February 1983 12:56-EST
	From: BATALI

	What I am claiming is that it is WHAT is said that is important.
	Grammar is the account of HOW it is said.

Granted, meaning is important, but this is completely neutral as to what
meaning is; it does not argue against meaning having grammatical structure.
The notion of a grammar is a defined notion.  What we take grammar to
be an ACCOUNT of is up to us.

ok, meaning is important.  What is your objection to grammar?  I think
the notion of grammar all by itself is inadequate, but it's a place to
start.  Where would you suggest starting, with the notion of a Turing
machine?

	David Mc
↑
Mail-From: BATALI created at  1-Feb-83 12:56:22
Date: Tuesday, 1 February 1983  12:56-EST
Sender: BATALI @ MIT-OZ
From: BATALI @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Semantic Grammar
In-reply-to: The message of 1 Feb 1983  12:01-EST from DAM

    From: DAM

    	It seems to me that a more plausible theory is that the "semantic
    schema" is described by a grammar.  A communication is complete when
    a complete parsable sentence has been constructed in the grammar of
    semantics.

This is what I was arguing against.  It seems to me to be the
prevailing attitude in semantics that such an approach is correct --
but they haven't made much progress in that direction.  What I am
claiming is that it is WHAT is said that is important.  Grammar is the
account of HOW it is said.

    Semantic grammar may be only remotely related to I/O grammar,
    i.e. to natural language grammar.  Perhaps the semantic grammar is not
    so much a grammar describing strings as a case filler or frame construction
    type of thing.  Fodor's suggestion that such a semantic grammar is innate
    is (I think) a good idea.  It allows us to start working under a particular
    concrete semantic paradigm.

Maybe I just don't like the use of the term "semantic grammar".  If
you replace that term by something like "semantic analysis procedure"
or somesuch, I might be happy.  Certainly we must use methods like the
ones you describe to figure out what communication means.  But
those methods aren't the meaning.

    And given this outlook one must immediately ask "what is a concept?".

I don't know.
↑
Mail-From: DAM created at  1-Feb-83 12:24:00
Date: Tuesday, 1 February 1983  12:24-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   MINSKY @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences


	Date: Tuesday, 1 February 1983  01:16-EST
	From: MINSKY

	I'm not sure what the (definitional tautologies) are.  What happens
	when one thinks about very small domains, e.g., a domain of, say, two
	points A and B, and statements like (Whichever X you choose, there is
	another one).  Is this an instance of a definitional tautology (since
	it would seem to be in the nature of what "two points" must be defined
	as)?

	I am in fact very interested in such "small" truths.  Understanding
these things is a good (and difficult) starting point.  How do we represent
(in a computationally useful form) the notion of "a set of two points"?
What IS this notion?  Is it a definition in some sentential language?
Is it a Turing machine?  I find the former more plausible.  But where
did this language come from?  Is the language itself just another notion?
Is THAT notion a definition in some language?  Or perhaps the language
in which the notion of two points is expressed is actually a Turing
machine arrived at via sparseness.

	David Mc
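
A brute-force check of this "small" truth in Python (illustrative only):
over a two-point domain, whichever X you choose there is another one, and
the same statement fails over a one-point domain, so it does express part
of what "two points" are defined to be.

two_points = {"A", "B"}

# "Whichever X you choose, there is another one."
assert all(any(y != x for y in two_points) for x in two_points)

# The statement fails for a single point.
one_point = {"A"}
assert not all(any(y != x for y in one_point) for x in one_point)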
↑
Mail-From: DAM created at  1-Feb-83 12:08:57
Date: Tuesday, 1 February 1983  12:08-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JMC @ SU-AI
cc:   phil-sci @ MIT-OZ
Subject: narrowness


	Date: 31 Jan 83  2030 PST
	From: John McCarthy <JMC at SU-AI>

	The trouble isn't so much that some theories don't take into account
	facts that aren't pure economics or linguistics, as the case may be,
	but that the fields develop methodologies in terms of which it is seen
	as wrong to go outside.

	But these fields do produce results which we should consider
interesting.  We do not have to constrain ourselves.  Self contained
theories are important and it is important to constrain oneself if one
wants to create a self contained theory.

	David Mc
↑
Mail-From: DAM created at  1-Feb-83 12:01:29
Date: Tuesday, 1 February 1983  12:01-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   Batali @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Something changes


	Date: Monday, 31 January 1983  12:11-EST
	From: BATALI

	Suppose we take as innate, not a syntactic characterization of
	communication, such as a sentence, but a SEMANTIC characterization.  That
	is: we take as primitive WHAT is said, rather than how it is said.

This (I think) is a good idea.  It does not imply that we can not ALSO study
grammar, or that grammar is not ALSO interesting.

	To do this, we need a theory of communication that tells us what
	communication is for.  Coming from the recent discussion of the
	innateness of the idea of time, let us imagine that communication is
	to inform the mind of change.  So the schematic of communication is
	expressed by the sentence "something changes".  Notice that this
	is a semantic notion; syntactic details are, as yet, irrelevant.

	A mind would know that a complete communication has occurred when it
	can fill out the schematic communication with particulars.

	It seems to me that a more plausible theory is that the "semantic
schema" is described by a grammar.  A communication is complete when
a complete parsable sentence has been constructed in the grammar of
semantics.  Semantic grammar may be only remotely related to I/O grammar,
i.e. to natural language grammar.  Perhaps the semantic grammar is not
so much a grammar describing strings as a case filler or frame construction
type of thing.  Fodor's suggestion that such a semantic grammar is innate
is (I think) a good idea.  It allows us to start working under a particular
concrete semantic paradigm.

	On this view, what must be innate is the IDEAS of change, and
	communication, and time and process and action.  Roughly, the set of
	concepts that a program would have to have to understand other
	programs, and itself.

And given this outlook one must immediately ask "what is a concept?".

	David Mc
↑
Mail-From: DAM created at  1-Feb-83 10:54:27
Date: Tuesday, 1 February 1983  10:54-EST
Sender: DAM @ MIT-OZ
From: DAM @ MIT-MC
To:   JCMa @ MIT-OZ
cc:   phil-sci @ MIT-OZ
Subject: Putnam on Chomsky, and Innateness


	Date: Monday, 31 January 1983, 23:44-EST
	From: JCMa

	In his "Reason, Truth, and History," Hilary Putnam only mentions Chomsky
	once.  This is it:

	"I will not discuss here the expectation aroused in some by Chomskian
	linguistics that cognitive psychology will discover *innate* algorithms
	which define rationality.  I myself think that this is an intellectual
	fashion which will be disappointed as the logical positivist hope for a
	symbolic inductive logic was disappointed."


	While the quote is quite interesting I am more concerned with
the issues at hand than with the opinions of others.  I am really more
interested in arguments than quotes of opinions (or debates about what
philosopher x said).  I agree with McCarthy that the emphasis on references
is a weakness in philosophy.

	David Mc

↑
Date: Tuesday, 1 February 1983, 10:44-EST
From: John Batali <Batali at MIT-OZ>
Subject: From an ex-biologist
To: phil-sci at MIT-OZ

I'd like to point out that JMC's "abstract" world of automatons and
their states is not as abstract as it seems.  Specifically it is
precisely the sort of thing that is going on these days in biology in
the understanding of the operations of viruses, bacteria and
sub-cellular organelles.  Look at the current theory of how ribosomes
work, or how bacteria swim.  The picture is of these very clever little
finite-state machines.  A biology-understanding program would be a great
application of the ideas in JMC's note.
↑
Mail-From: LEVITT created at  1-Feb-83 06:02:33
Date: Tuesday, 1 February 1983  06:02-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   JCMa @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: Putnam: Correspondence, Tarski, and Truth
In-reply-to: The message of 1 Feb 1983 04:33-EST from JCMa

I'm puzzled: why are we still discussing Truth?  I seem to remember
you (JCMa) along with most of the other participants, saying things to
the effect that absolute, universal truth doesn't exist (except
perhaps tautologically in some formal systems).  I didn't save your
message, but I remember being surprised that after disavowing belief
in the concept, you went on to argue a point about it or quote an idea
on it.  Argument by proxy is OK I guess, but if we ALL agree it's the
wrong tree, why bother?

Maybe the problem is that no one argued directly enough against BATALI,
who made a distinction between scientists and engineers, claiming that
scientists seem to seek "truth".  MINSKY, KDF and others suggested
more meaningful substitutes for "true", like "reliable within certain
well-marked boundaries" or "characterizing many experiments with a
short description" or simply "useful" -- all of which seemed more
structured and satisfactory.  What do we gain then from Tarski's idea
about what truth REALLY is, or a discussion by Putnam about Tarski
that begins

   The defenders of the fact-value dichotomy concede that science does
   presuppose some values, for example, science presupposes that we want
   TRUTH, but argue that these values are not ETHICAL values. . . .  

Am I the only one with this impression?  It could be that I'm just
decked by what's now a stack of vicarious arguments.  I'm still
recovering from Gavan and Hewitt arguing at length about Feyerabend
when neither of them would take Feyerabend's position.  With 2K years
of writing to survey (and I've already argued against survey courses),
could we at least restrict ourselves to arguments WE think are
understandable and plausible?  There's no drama in a discussion that
goes "defenders of X believe Y" -- it's like watching people watch TV.

JCMa -- do you think Putnam's argument is important, or are you just
satisfying a curiosity someone expressed earlier in the discussion?
↑
Date: Tuesday, 1 February 1983, 04:33-EST
From: JCMa@MIT-OZ
Subject: Putnam: Correspondence, Tarski, and Truth
To: phil-sci@MIT-OZ

These quotes are from chapter six "Fact and Value" in Hilary Putnam,
Reason, Truth, and History, (Cambridge: Cambridge University Press,
1981), pp. 127-129.  Comments and rebuttals?

"Questions in philosophy of language, epistemology, and even in
metaphysics may appear to be questions which, however interesting, are
somewhat optional from the point of view of most people's lives.  But
the question of fact and value is a forced choice question.  Any
reflective person HAS to have a real opinion upon it. . . .  If the
question of fact and value is a forced choice question for reflective
people, one particular answer to that question, the answer that fact and
value are totally disjoint realms, that the dichotomy `statement of fact
OR value judgment' is an absolute one, has assumed the status of a
cultural institution. . . .

The defenders of the fact-value dichotomy concede that science does
presuppose some values, for example, science presupposes that we want
TRUTH, but argue that these values are not ETHICAL values. . . .  

The idea that truth is a passive copy of what is `really'
(mind-independently, discourse-independently) `there' has collapsed
under the critiques of Kant, Wittgenstein, and other philosophers even
if it continues to have a deep hold on our thinking. . . .

Some philosophers have appealed to the EQUIVALENCE PRINCIPLE, that is TO
SAY OF A STATEMENT THAT IT IS TRUE IS EQUIVALENT TO ASSERTING THE
STATEMENT, to argue that there are no real philosophical problems about
truth.  Others appeal to the work of Alfred Tarski, the logician who
showed how, given a formal language . . . , one can define `true' FOR
THAT LANGUAGE in a stronger language (a so-called `meta-language').

Tarski's work was itself based on the equivalence principle: in fact his
criterion for a successful definition of `true' was that it should yield
all sentences of the form `P' is true if and only if P, e.g.

   (T) `Snow is white' is true if and only if snow is white

as theorems of the meta-language (where P is a sentence of the formal
notation in question).

But the equivalence principle is philosophically neutral, and so is
Tarski's work.  On ANY theory of truth, `Snow is white' is equivalent
to `"Snow is white" is true.'

Positivist philosophers would reply that if you know (T) above, you KNOW
what `"Snow is white" is true' means: it means SNOW IS WHITE.  And if
you don't understand `snow' and `white', they would add, you are in
trouble indeed!  But the problem is not that we don't understand `Snow
is white'; the problem is that we don't understand WHAT IT IS TO
UNDERSTAND `Snow is white.'  This is the philosophical problem.  About
this (T) says nothing."
↑
Date: Tuesday, 1 February 1983, 03:32-EST
From: JCMa@MIT-OZ
Subject: Re:  meta-epistemology, etc.
To: ISAACSON@USC-ISI
Cc: phil-sci@MIT-MC
In-reply-to: <[USC-ISI]31-Jan-83 20:25:30.ISAACSON>

    From: ISAACSON at USC-ISI
    Message-ID: <[USC-ISI]31-Jan-83 20:25:30.ISAACSON>
    In-Reply-To: Your message of Monday, 31 Jan 1983, 22:43-EST

    So, I don't know if I think of the fetus as "encoding" time, or
    as "manifesting" time, and I'm not sure that there is a
    substantial difference between the two views.

It seems that something which encodes must have an explicit
representation of what it encodes.  On the other hand, something which
manifests may or may not have an explicit representation.  That's a big
difference!
↑
Date: Tuesday, 1 February 1983, 03:21-EST
From: JCMa@MIT-OZ
Subject: Languages, tenses
To: LEVITT@MIT-MC
Cc: phil-sci@mc
In-reply-to: The message of 31 Jan 83 16:20-EST from LEVITT at MIT-MC

    From: LEVITT @ MIT-MC
    Subject: Languages, tenses
    In-reply-to: The message of 30 Jan 1983 15:01-EST from John C. Mallery <JCMa>

    This seems to be one of the more plausible manifestations of the
    homily "language limits thought".  It seems inevitable that the richer
    vocabulary of tenses a language has -- especially subjunctive and
    perfect tenses -- the more tractable it will be to describe complex
    plans, with concurrencies and contingencies, e.g. build a machine.

I suppose it might.  Presumably you can accomplish the same tasks either
way.  Thus, the reduction in the required amount of problem solving due to
use of the syntactic approach could make things more efficient; but I bet this
would only be a marginal improvement because lots of problem solving
remains to be done for all those things that aren't syntactically
reducible.

    Without this fluency it must be very hard to describe such a plan to
    someone else if help is needed.

I don't see why.  All that is needed is to develop some semantic conventions
for expressing the same things.  Of course, it would remove the need to develop
the conventions, and that might facilitate plan description, although this
wouldn't matter once the conventions were in place.  

    Could hairy syntax have been the key steppingstone that let Europe
    build its technology and dominate the world?

I doubt it.  Social organization was much more important in fostering the
industrial revolution.  Countries in which a commercial bourgeoisie could
develop, and evolve into an industrial bourgeoisie, were the ones that did best.
Countries with strong oligarchies did the worst.

    Could hairy tenses also make it easier to implement a personal plan,
    to remember and describe such a plan to oneself?

Perhaps.
↑
Mail-From: GAVAN created at  1-Feb-83 03:11:05
Date: Tuesday, 1 February 1983  03:11-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John McCarthy <JMC @ SU-AI>
Cc:   phil-sci @ MIT-OZ
Subject: criticism of coherence and consensus   
In-reply-to: The message of 31 Jan 83  2350 PST from John McCarthy <JMC at SU-AI>

    From: John McCarthy <JMC at SU-AI>

    As I remarked in my long message of 1818PST, which you may not have
    got to yet or noticed the aside to you in it, I will give my opinions
    of coherence and consensus if you give me a summary of your views,
    or if you would rather references to previous messages or the literature.

Yes, that message just came in.  I want to print it out so I can read it while
I'm on jury duty today (hopefully the prosecutor or defense attorney won't ask
me about questions of truth, proof, or evidence).  But the Dover is down again.
I'll try to print it out elsewhere tonight and respond as soon as I can.

    I have read the article on the coherence theory in the Encyclopedia of
    Philosophy, but the author of the article doesn't seem much more friendly
    to the idea than I am, so I would prefer to criticize a presentation
    by a partisan of it.

    	The index to the Encyclopedia mentions consensus only in
    connection with the  consensus gentium  argument for the existence
    of God, so I suppose the "consensus theory of truth" is due to Kuhn
    or Feyerabend or someone like that.  

The consensus theory is alluded to in Putnam's *Reason, Truth, and
History* and, as you suspected, it lurks in the background of the
debates between Kuhn, Feyerabend, et al.  The best explication of it,
however, can be found in Jurgen Habermas' "Theories of Truth."
Unfortunately, it hasn't yet been published in English.  I have a copy
of a translation by Tom McCarthy (Philosophy Department, Boston
University).  I'll try to mail you a copy within the next few days, if
I'm not sequestered.  Meanwhile, McCarthy presents a summary of
Habermas' philosophy in *The Critical Theory of Jurgen Habermas* (MIT
Press).

    I don't promise to pursue
    them very far, because, believe it or not, I am trying to cure myself
    of being a controversialist, and will go only to limited lengths in
    trying to win arguments.  

Persuasion is a difficult task.  It takes much effort, and offers few
rewards.  For me, the point is not to win arguments, but to clarify
the issues.

    The little I have read of Kuhn has left me
    with the impression that there is unlikely to be anything useful for
    AI in what he says.  I would, as it happens, find a reference to Putnam
    more interesting, and I have found my copy of volume 2.

The Kuhn-Popper-Lakatos-Feyerabend debate was under discussion on this
list because Carl Hewitt is interested in the issue for reasons
relating to his research interests.  See Hewitt and Kornfeld's MIT-AI
Lab Memo, "The Scientific Community Metaphor".  I really recommend
Putnam's *Reason, Truth, and History*.  It relates to the
Kuhn-Feyerabend debate, and also includes a lengthy critique of the
correspondence theory.  Putnam wants to replace it with a coherence
theory.  I'll argue as best I can, but Putnam might be more persuasive
for you.
↑
Date: 31 Jan 83  2350 PST
From: John McCarthy <JMC@SU-AI>
Subject: criticism of coherence and consensus   
To:   gavan@MIT-OZ
CC:   phil-sci@MIT-OZ  

As I remarked in my long message of 1818PST, which you may not have
got to yet or noticed the aside to you in it, I will give my opinions
of coherence and consensus if you give me a summary of your views,
or if you would rather references to previous messages or the literature.
I have read the article on the coherence theory in the Encyclopedia of
Philosophy, but the author of the article doesn't seem much more friendly
to the idea than I am, so I would prefer to criticize a presentation
by a partisan of it.

	The index to the Encyclopedia mentions consensus only in
connection with the  consensus gentium  argument for the existence
of God, so I suppose the "consensus theory of truth" is due to Kuhn
or Feyerabend or someone like that.  I don't promise to pursue
them very far, because, believe it or not, I am trying to cure myself
of being a controversialist, and will go only to limited lengths in
trying to win arguments.  The little I have read of Kuhn has left me
with the impression that there is unlikely to be anything useful for
AI in what he says.  I would, as it happens, find a reference to Putnam
more interesting, and I have found my copy of volume 2.

↑
Mail-From: MINSKY created at  1-Feb-83 01:16:17
Date: Tuesday, 1 February 1983  01:16-EST
Sender: MINSKY @ MIT-OZ
From: MINSKY @ MIT-MC
To:   DAM @ MIT-OZ
Cc:   phil-sci @ MIT-OZ
Subject: innateness, sentences
In-reply-to: The message of 31 Jan 1983  20:03-EST from DAM


That was very helpful.  

DAM:  The theory that it is the provable theorems of ZF set theory is
     a very useful precise theory of metamathematics.  ...An imprecise
     theory of metamathematics is that it is the set of "tautological"
     truths; whatever those are, they seem to be objective; most people
     agree about definitional tautologies once they understand them.

I'm not sure what the latter are.  What happens when one thinks about
very small domains, e.g., a domain of, say, two points A and B, and
statements like (Whichever X you choose, there is another one).  Is
this an instance of a definitional tautology (since it would seem to
be in the nature of what "two points" must be defined as)?  Do you get
the same basic questions for such issues?  In other words, have you
the same concerns for small-scale truths as well?  Or is it that the more
general mathematical truth problem is apparently no harder, so one might
as well deal with it?
↑
Date: Tuesday, 1 February 1983  01:14-EST
Sender: LEVITT @ MIT-OZ
From: LEVITT @ MIT-MC
To:   ISAACSON @ USC-ISI
Cc:   JCMa @ MIT-OZ, phil-sci @ MIT-MC
Subject: "primitive" representations of space and time
In-reply-to: The message of 31 Jan 1983  23:25-EST from ISAACSON at USC-ISI

    Date: Monday, 31 January 1983  23:25-EST
    From: ISAACSON at USC-ISI
    To:   JCMa
    cc:   phil-sci at MIT-MC, isaacson at USC-ISI
    Re:   meta-epistemology, etc.

    In-Reply-To: Your message of Monday, 31 Jan 1983, 22:43-EST
   ...  I'm tempted to say
    unfolding in TIME, as one would normally think about processes,
    but I can't bring myself to say that.  I'm stuck with the more
    primitive concept of SEQUENTIALITY as a precursor of the concept
    of time.

When people discuss the formation of spatial representations, they
don't get stuck so easily (or maybe, they have different WAYS to get
stuck) as when worrying about time, since the retina provides one
obvious anatomical picture of one "primitive" representation.  There
are probably equally specialized organs, say in the temporal lobe,
that segment time -- especially for representing periodic things
-- but we don't know much about them.  (Of course, in the ear the
basilar membrane makes rapid periodicity tractable by inventing
"pitch".)  Anyway, there's no reason we can't invent our own
"primitives" to think productively about temporal reasoning without
understanding the anatomy -- like Waltz and Evans, who didn't have to
wait for low-level vision programs to work to make great discoveries
about thinking about line drawings.  That decoupling still seems to be
the big AI "meta-breakthrough".

    So, I don't know whether to think of the fetus as "encoding" time, or
    as "manifesting" time, and I'm not sure that there is a
    substantial difference between the two views.

"Manifesting time"??  To me this sounds like Heidegger, who often
surpasses Hegel as most annoying philosopher.  (In philosophical
reading, my main filter discards work that uses "being" or "existence"
as the subject of a sentence.)  What do you mean?
↑
Mail-From: GAVAN created at  1-Feb-83 00:42:37
Date: Tuesday, 1 February 1983  00:42-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   BATALI @ MIT-OZ
Cc:   John McCarthy <JMC @ SU-AI>, phil-sci @ MIT-OZ
Subject: There you don't go again, JMC.   
In-reply-to: The message of 31 Jan 1983  11:09-EST from BATALI

    From: BATALI

        From: GAVAN

        You have already defended the correspondence theory "to the max", as
        they say in California.  Yet your denials of both the consensus and
        coherence theories have not been reasoned critiques of them, but
        rather ad hominem attacks against the person presenting them.  If you
        don't like the message, criticize IT -- not the messenger.

    As one on the correspondence side, let me say that I am not against
    the coherence of the coherence view and I consent to consensus.  I
    won't criticise these views because they are right.  I won't. I won't. I
    won't. And you can't make me.

I issued the challenge to JMC, not to you, BATALI.  JMC is the one who
categorically denies the coherence theory of truth and called the
consensus theory "muddled."  Other than that he has limited his
discussions to defenses of the correspondence dogma.  He has not
bothered to present reasoned critiques of the coherence and consensus
theories.  If his adherence to the correspondence theory and his
denial of the coherence and consensus views are anything other than
irrational, dogmatic prejudices, I would like to hear his rationale.
↑
Date: Tuesday, 1 February 1983  00:35-EST
Sender: GAVAN @ MIT-OZ
From: GAVAN @ MIT-MC
To:   John C. Mallery <JCMa @ MIT-OZ>
Cc:   ISAACSON @ USC-ISI, phil-sci @ MIT-MC
Subject: Determinate Being
In-reply-to: The message of 31 Jan 1983 22:43-EST from John C. Mallery <JCMa>

    From: John C. Mallery <JCMa>

    If you will admit that a fetus is a process, then doesn't it
    implicitly encode time by its very definition as a process?

Of course, if your ontology includes the notion of determinate being, 
then you believe everything is a process.  Even a brick.